Sep 30 09:46:25 crc systemd[1]: Starting Kubernetes Kubelet... Sep 30 09:46:25 crc restorecon[4679]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963 Sep 30 09:46:25 
crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Sep 30 09:46:25 crc restorecon[4679]: 
/var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 
09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:25 crc restorecon[4679]: 
/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc 
restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Sep 30 09:46:25 crc restorecon[4679]: 
/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 
30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:25 
crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Sep 30 09:46:25 
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 09:46:25 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 
09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Sep 30 09:46:26 crc 
restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 
09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 
09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc 
restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 09:46:26 crc restorecon[4679]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Sep 30 09:46:26 crc restorecon[4679]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Sep 30 09:46:27 crc kubenswrapper[4970]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 30 09:46:27 crc kubenswrapper[4970]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Sep 30 09:46:27 crc kubenswrapper[4970]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 30 09:46:27 crc kubenswrapper[4970]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
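The restorecon pass that ends above records two outcomes per path: "Relabeled PATH from OLD to NEW" means the context was reset to the policy default, while "PATH not reset as customized by admin to CTX" means the path carries an admin-customized context (here overwhelmingly container_file_t with per-pod MCS categories such as s0:c7,c13) that restorecon preserves. The following is a minimal Python sketch, not part of the original log, that tallies both outcomes from a saved copy of this journal text; the file name is hypothetical, and the regexes assume exactly the message shapes shown here, including several records fused onto one physical line.

import re
import sys
from collections import Counter

# Message shapes seen in this journal (several records can share one physical line):
#   restorecon[PID]: Relabeled PATH from OLD_CTX to NEW_CTX
#   restorecon[PID]: PATH not reset as customized by admin to CTX
RELABELED = re.compile(r"restorecon\[\d+\]: Relabeled (\S+) from (\S+) to (\S+)")
NOT_RESET = re.compile(r"restorecon\[\d+\]: (\S+) not reset as customized by admin to (\S+)")

def summarize(stream):
    outcomes = Counter()       # totals for relabeled vs. not-reset records
    kept_contexts = Counter()  # which customized contexts restorecon preserved
    for line in stream:
        outcomes["relabeled"] += sum(1 for _ in RELABELED.finditer(line))
        for m in NOT_RESET.finditer(line):
            outcomes["not_reset"] += 1
            kept_contexts[m.group(2)] += 1
    return outcomes, kept_contexts

if __name__ == "__main__":
    # Usage: python restorecon_summary.py kubelet-start.log  (hypothetical file name)
    with open(sys.argv[1]) as f:
        outcomes, kept = summarize(f)
    print(dict(outcomes))
    for ctx, n in kept.most_common(10):
        print(f"{n:7d}  {ctx}")

On this log the "not reset" bucket dominates, grouped by the per-pod MCS pairs (c7,c13 for the catalog pods, c682,c947 for the oauth pod, and so on), which is the expected picture when pod volumes were labeled by the container runtime rather than by the base file-context policy.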
Sep 30 09:46:27 crc kubenswrapper[4970]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 30 09:46:27 crc kubenswrapper[4970]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.390033 4970 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399083 4970 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399114 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399124 4970 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399132 4970 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399140 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399150 4970 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399159 4970 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399169 4970 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399178 4970 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399187 4970 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399194 4970 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399203 4970 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399210 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399218 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfig Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399225 4970 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399235 4970 feature_gate.go:330] unrecognized feature gate: PlatformOperators Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399244 4970 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399252 4970 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399260 4970 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399267 4970 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399275 4970 
feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399283 4970 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399293 4970 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399304 4970 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399312 4970 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399321 4970 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399330 4970 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399341 4970 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399350 4970 feature_gate.go:330] unrecognized feature gate: Example Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399359 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399367 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399375 4970 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399383 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399390 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399397 4970 feature_gate.go:330] unrecognized feature gate: OVNObservability Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399405 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399413 4970 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399421 4970 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399429 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399437 4970 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399444 4970 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399452 4970 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399460 4970 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399467 4970 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399478 4970 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
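Most of the deprecation warnings above point at the same remedy: move --container-runtime-endpoint, --volume-plugin-dir, --register-with-taints, and --system-reserved into the KubeletConfiguration file that the flag dump below shows is already in use (--config="/etc/kubernetes/kubelet.conf"). The exceptions are --minimum-container-ttl-duration (the log says to use the eviction flags) and --pod-infra-container-image (the image GC will take the sandbox image from CRI). A sketch of the equivalent config stanza, with values copied verbatim from the flag dump that follows; the field names follow the documented kubelet.config.k8s.io/v1beta1 schema, and the JSON output is assumed acceptable to the kubelet's YAML config parser — this is illustrative, not this node's actual file:

```python
import json

# Equivalent KubeletConfiguration fields for the deprecated flags above.
# Values are copied verbatim from the FLAG: dump later in this log.
config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "containerRuntimeEndpoint": "/var/run/crio/crio.sock",
    "volumePluginDir": "/etc/kubernetes/kubelet-plugins/volume/exec",
    "registerWithTaints": [
        {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
    ],
    "systemReserved": {
        "cpu": "200m",
        "memory": "350Mi",
        "ephemeral-storage": "350Mi",
    },
}
print(json.dumps(config, indent=2))
```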
Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399486 4970 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399495 4970 feature_gate.go:330] unrecognized feature gate: SignatureStores Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399503 4970 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399511 4970 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399518 4970 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399526 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399533 4970 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399543 4970 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399556 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399565 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399574 4970 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399583 4970 feature_gate.go:330] unrecognized feature gate: GatewayAPI Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399592 4970 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399600 4970 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399624 4970 feature_gate.go:330] unrecognized feature gate: PinnedImages Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399643 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399656 4970 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399665 4970 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399677 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399685 4970 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399692 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399714 4970 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399724 4970 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399734 4970 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399743 4970 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.399768 4970 
feature_gate.go:330] unrecognized feature gate: NewOLM Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.399940 4970 flags.go:64] FLAG: --address="0.0.0.0" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.399959 4970 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.399974 4970 flags.go:64] FLAG: --anonymous-auth="true" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400019 4970 flags.go:64] FLAG: --application-metrics-count-limit="100" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400033 4970 flags.go:64] FLAG: --authentication-token-webhook="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400042 4970 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400053 4970 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400063 4970 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400073 4970 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400082 4970 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400091 4970 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400101 4970 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400110 4970 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400119 4970 flags.go:64] FLAG: --cgroup-root="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400128 4970 flags.go:64] FLAG: --cgroups-per-qos="true" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400137 4970 flags.go:64] FLAG: --client-ca-file="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400145 4970 flags.go:64] FLAG: --cloud-config="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400154 4970 flags.go:64] FLAG: --cloud-provider="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400162 4970 flags.go:64] FLAG: --cluster-dns="[]" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400173 4970 flags.go:64] FLAG: --cluster-domain="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400181 4970 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400190 4970 flags.go:64] FLAG: --config-dir="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400198 4970 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400208 4970 flags.go:64] FLAG: --container-log-max-files="5" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400221 4970 flags.go:64] FLAG: --container-log-max-size="10Mi" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400230 4970 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400239 4970 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400248 4970 flags.go:64] FLAG: --containerd-namespace="k8s.io" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400257 4970 flags.go:64] FLAG: 
--contention-profiling="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400266 4970 flags.go:64] FLAG: --cpu-cfs-quota="true" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400275 4970 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400285 4970 flags.go:64] FLAG: --cpu-manager-policy="none" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400295 4970 flags.go:64] FLAG: --cpu-manager-policy-options="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400306 4970 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400315 4970 flags.go:64] FLAG: --enable-controller-attach-detach="true" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400324 4970 flags.go:64] FLAG: --enable-debugging-handlers="true" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400333 4970 flags.go:64] FLAG: --enable-load-reader="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400342 4970 flags.go:64] FLAG: --enable-server="true" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400351 4970 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400363 4970 flags.go:64] FLAG: --event-burst="100" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400373 4970 flags.go:64] FLAG: --event-qps="50" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400382 4970 flags.go:64] FLAG: --event-storage-age-limit="default=0" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400391 4970 flags.go:64] FLAG: --event-storage-event-limit="default=0" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400399 4970 flags.go:64] FLAG: --eviction-hard="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400416 4970 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400425 4970 flags.go:64] FLAG: --eviction-minimum-reclaim="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400434 4970 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400443 4970 flags.go:64] FLAG: --eviction-soft="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400452 4970 flags.go:64] FLAG: --eviction-soft-grace-period="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400460 4970 flags.go:64] FLAG: --exit-on-lock-contention="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400469 4970 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400478 4970 flags.go:64] FLAG: --experimental-mounter-path="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400487 4970 flags.go:64] FLAG: --fail-cgroupv1="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400496 4970 flags.go:64] FLAG: --fail-swap-on="true" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400504 4970 flags.go:64] FLAG: --feature-gates="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400515 4970 flags.go:64] FLAG: --file-check-frequency="20s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400524 4970 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400532 4970 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400541 4970 flags.go:64] FLAG: 
--healthz-bind-address="127.0.0.1" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400551 4970 flags.go:64] FLAG: --healthz-port="10248" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400561 4970 flags.go:64] FLAG: --help="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400570 4970 flags.go:64] FLAG: --hostname-override="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400578 4970 flags.go:64] FLAG: --housekeeping-interval="10s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400588 4970 flags.go:64] FLAG: --http-check-frequency="20s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400597 4970 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400606 4970 flags.go:64] FLAG: --image-credential-provider-config="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400616 4970 flags.go:64] FLAG: --image-gc-high-threshold="85" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400625 4970 flags.go:64] FLAG: --image-gc-low-threshold="80" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400633 4970 flags.go:64] FLAG: --image-service-endpoint="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400642 4970 flags.go:64] FLAG: --kernel-memcg-notification="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400651 4970 flags.go:64] FLAG: --kube-api-burst="100" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400660 4970 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400671 4970 flags.go:64] FLAG: --kube-api-qps="50" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400680 4970 flags.go:64] FLAG: --kube-reserved="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400690 4970 flags.go:64] FLAG: --kube-reserved-cgroup="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400699 4970 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400711 4970 flags.go:64] FLAG: --kubelet-cgroups="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400720 4970 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400729 4970 flags.go:64] FLAG: --lock-file="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400738 4970 flags.go:64] FLAG: --log-cadvisor-usage="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400747 4970 flags.go:64] FLAG: --log-flush-frequency="5s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400756 4970 flags.go:64] FLAG: --log-json-info-buffer-size="0" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400769 4970 flags.go:64] FLAG: --log-json-split-stream="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400777 4970 flags.go:64] FLAG: --log-text-info-buffer-size="0" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400786 4970 flags.go:64] FLAG: --log-text-split-stream="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400795 4970 flags.go:64] FLAG: --logging-format="text" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400827 4970 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400838 4970 flags.go:64] FLAG: --make-iptables-util-chains="true" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400847 4970 flags.go:64] FLAG: 
--manifest-url="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400856 4970 flags.go:64] FLAG: --manifest-url-header="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400867 4970 flags.go:64] FLAG: --max-housekeeping-interval="15s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400877 4970 flags.go:64] FLAG: --max-open-files="1000000" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400887 4970 flags.go:64] FLAG: --max-pods="110" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400896 4970 flags.go:64] FLAG: --maximum-dead-containers="-1" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400905 4970 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400915 4970 flags.go:64] FLAG: --memory-manager-policy="None" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400924 4970 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400933 4970 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400942 4970 flags.go:64] FLAG: --node-ip="192.168.126.11" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400951 4970 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400969 4970 flags.go:64] FLAG: --node-status-max-images="50" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.400978 4970 flags.go:64] FLAG: --node-status-update-frequency="10s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401013 4970 flags.go:64] FLAG: --oom-score-adj="-999" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401023 4970 flags.go:64] FLAG: --pod-cidr="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401031 4970 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401047 4970 flags.go:64] FLAG: --pod-manifest-path="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401059 4970 flags.go:64] FLAG: --pod-max-pids="-1" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401070 4970 flags.go:64] FLAG: --pods-per-core="0" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401091 4970 flags.go:64] FLAG: --port="10250" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401109 4970 flags.go:64] FLAG: --protect-kernel-defaults="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401121 4970 flags.go:64] FLAG: --provider-id="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401132 4970 flags.go:64] FLAG: --qos-reserved="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401148 4970 flags.go:64] FLAG: --read-only-port="10255" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401161 4970 flags.go:64] FLAG: --register-node="true" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401173 4970 flags.go:64] FLAG: --register-schedulable="true" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401182 4970 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401198 4970 flags.go:64] FLAG: --registry-burst="10" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401208 4970 flags.go:64] FLAG: 
--registry-qps="5" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401217 4970 flags.go:64] FLAG: --reserved-cpus="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401225 4970 flags.go:64] FLAG: --reserved-memory="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401236 4970 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401245 4970 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401254 4970 flags.go:64] FLAG: --rotate-certificates="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401263 4970 flags.go:64] FLAG: --rotate-server-certificates="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401272 4970 flags.go:64] FLAG: --runonce="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401280 4970 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401289 4970 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401298 4970 flags.go:64] FLAG: --seccomp-default="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401308 4970 flags.go:64] FLAG: --serialize-image-pulls="true" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401317 4970 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401327 4970 flags.go:64] FLAG: --storage-driver-db="cadvisor" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401336 4970 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401344 4970 flags.go:64] FLAG: --storage-driver-password="root" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401353 4970 flags.go:64] FLAG: --storage-driver-secure="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401362 4970 flags.go:64] FLAG: --storage-driver-table="stats" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401371 4970 flags.go:64] FLAG: --storage-driver-user="root" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401379 4970 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401390 4970 flags.go:64] FLAG: --sync-frequency="1m0s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401401 4970 flags.go:64] FLAG: --system-cgroups="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401412 4970 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401431 4970 flags.go:64] FLAG: --system-reserved-cgroup="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401442 4970 flags.go:64] FLAG: --tls-cert-file="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401453 4970 flags.go:64] FLAG: --tls-cipher-suites="[]" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401467 4970 flags.go:64] FLAG: --tls-min-version="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401477 4970 flags.go:64] FLAG: --tls-private-key-file="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401487 4970 flags.go:64] FLAG: --topology-manager-policy="none" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401495 4970 flags.go:64] FLAG: --topology-manager-policy-options="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401505 4970 flags.go:64] FLAG: 
--topology-manager-scope="container" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401515 4970 flags.go:64] FLAG: --v="2" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401526 4970 flags.go:64] FLAG: --version="false" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401537 4970 flags.go:64] FLAG: --vmodule="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401548 4970 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.401557 4970 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401764 4970 feature_gate.go:330] unrecognized feature gate: GatewayAPI Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401774 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401783 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401791 4970 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401799 4970 feature_gate.go:330] unrecognized feature gate: Example Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401807 4970 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401816 4970 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401824 4970 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401833 4970 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401842 4970 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401850 4970 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401858 4970 feature_gate.go:330] unrecognized feature gate: OVNObservability Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401865 4970 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401873 4970 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401881 4970 feature_gate.go:330] unrecognized feature gate: PlatformOperators Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401889 4970 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401897 4970 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401904 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401912 4970 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401920 4970 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401930 4970 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
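The flags.go:64 entries above are the kubelet echoing its effective command line, one FLAG: --name="value" entry per flag, visible at this node's --v="2" verbosity. The format is regular enough to scrape when comparing effective flags across nodes; a small parsing sketch for the format shown:

```python
import re

# Scrape the kubelet's 'FLAG: --name="value"' dump (the flags.go:64 entries
# above) into a dict, e.g. to diff effective flags between two nodes.
FLAG_RE = re.compile(r'FLAG: (--[\w.-]+)="(.*?)"')

def parse_flag_dump(journal_text: str) -> dict:
    return {name: value for name, value in FLAG_RE.findall(journal_text)}

sample = 'I0930 09:46:27.401515 4970 flags.go:64] FLAG: --v="2"'
assert parse_flag_dump(sample) == {"--v": "2"}
```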
Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401940 4970 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401949 4970 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401959 4970 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401974 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.401983 4970 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402026 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfig Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402035 4970 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402043 4970 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402051 4970 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402058 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402067 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402075 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402082 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402090 4970 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402097 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402109 4970 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402117 4970 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402125 4970 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402132 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402140 4970 feature_gate.go:330] unrecognized feature gate: NewOLM Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402148 4970 feature_gate.go:330] unrecognized feature gate: SignatureStores Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402156 4970 feature_gate.go:330] unrecognized feature gate: PinnedImages Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402163 4970 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402173 4970 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402182 4970 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402190 4970 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402198 4970 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402205 4970 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402213 4970 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402220 4970 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402228 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402236 4970 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402244 4970 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402251 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402259 4970 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402269 4970 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402276 4970 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402284 4970 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402292 4970 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402299 4970 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402306 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402315 4970 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402325 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402335 4970 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402344 4970 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402354 4970 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402364 4970 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402382 4970 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402395 4970 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.402407 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.402433 4970 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.416431 4970 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.416480 4970 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416608 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416622 4970 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416632 4970 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416642 4970 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416650 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416659 4970 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416667 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416676 4970 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416684 4970 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416695 4970 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
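For all the warning noise, the feature_gate.go:386 summary just above is what actually takes effect: the gate set collapses to a fifteen-entry map in which only four gates end up true (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1, ValidatingAdmissionPolicy). The same summary line is printed again on each later pass, so it is a convenient anchor to extract when auditing a node; a parsing sketch for the exact format logged here:

```python
import re

# Parse the kubelet's "feature gates: {map[...]}" summary into a dict.
# A sketch matching the log format above; not an official parser.
def parse_feature_gates(line: str) -> dict:
    m = re.search(r"feature gates: \{map\[(.*)\]\}", line)
    if not m:
        raise ValueError("no feature-gates summary found")
    return {
        name: value == "true"
        for name, value in (pair.split(":") for pair in m.group(1).split())
    }

line = ("feature gates: {map[CloudDualStackNodeIPs:true KMSv1:true "
        "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")
gates = parse_feature_gates(line)
assert gates["KMSv1"] is True and gates["VolumeAttributesClass"] is False
```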
Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416708 4970 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416717 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416727 4970 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416735 4970 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416744 4970 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416752 4970 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416761 4970 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416770 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416778 4970 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416787 4970 feature_gate.go:330] unrecognized feature gate: OVNObservability Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416795 4970 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416804 4970 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416812 4970 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416821 4970 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416830 4970 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416839 4970 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416847 4970 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416857 4970 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416868 4970 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416878 4970 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416888 4970 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416896 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416904 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfig Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416912 4970 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416921 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416930 4970 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416938 4970 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416945 4970 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416952 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416960 4970 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416968 4970 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416975 4970 feature_gate.go:330] unrecognized feature gate: Example Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.416983 4970 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417017 4970 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417025 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417032 4970 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417040 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417048 4970 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417055 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417063 4970 feature_gate.go:330] unrecognized feature gate: SignatureStores Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417079 4970 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417086 4970 feature_gate.go:330] unrecognized feature gate: PlatformOperators Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417094 4970 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417101 4970 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417109 4970 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417119 4970 feature_gate.go:330] unrecognized feature gate: 
IngressControllerDynamicConfigurationManager Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417126 4970 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417137 4970 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417154 4970 feature_gate.go:330] unrecognized feature gate: GatewayAPI Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417163 4970 feature_gate.go:330] unrecognized feature gate: NewOLM Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417171 4970 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417179 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417186 4970 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417201 4970 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417209 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417217 4970 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417224 4970 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417232 4970 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417240 4970 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417248 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417256 4970 feature_gate.go:330] unrecognized feature gate: PinnedImages Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.417270 4970 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417514 4970 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417528 4970 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417537 4970 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417546 4970 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417556 4970 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417564 4970 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Sep 30 09:46:27 crc 
kubenswrapper[4970]: W0930 09:46:27.417574 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417583 4970 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417593 4970 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417605 4970 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417614 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417622 4970 feature_gate.go:330] unrecognized feature gate: GatewayAPI Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417632 4970 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417640 4970 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417648 4970 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417656 4970 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417664 4970 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417672 4970 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417681 4970 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417688 4970 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417696 4970 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417704 4970 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417713 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417723 4970 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417733 4970 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417742 4970 feature_gate.go:330] unrecognized feature gate: OVNObservability Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417752 4970 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417761 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417771 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417781 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417791 4970 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417800 4970 feature_gate.go:330] 
unrecognized feature gate: AWSEFSDriverVolumeMetrics Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417810 4970 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417820 4970 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417831 4970 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417839 4970 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417846 4970 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417854 4970 feature_gate.go:330] unrecognized feature gate: NewOLM Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417862 4970 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417872 4970 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417882 4970 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417891 4970 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417901 4970 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417914 4970 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417926 4970 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417938 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417952 4970 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417964 4970 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417973 4970 feature_gate.go:330] unrecognized feature gate: SignatureStores Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.417983 4970 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418027 4970 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418038 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418048 4970 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418057 4970 feature_gate.go:330] unrecognized feature gate: Example Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418070 4970 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
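The "unrecognized feature gate" warnings repeat in several near-identical rounds above, which suggests the same gate set is applied more than once during startup (flag parsing, config load, and server bring-up each appear to log their own pass). The named gates look like OpenShift-level gates that the embedded Kubernetes gate parser simply does not know, so the warnings read as noisy rather than fatal. When triaging a journal like this, deduplicating first keeps the signal readable; a sketch:

```python
import re
from collections import Counter

# Count "unrecognized feature gate" warnings in a saved journal dump
# (e.g. `journalctl -u kubelet > kubelet.log`). Illustrative sketch only.
PATTERN = re.compile(r"unrecognized feature gate: (\S+)")

def unrecognized_gates(path: str) -> Counter:
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = PATTERN.search(line)
            if m:
                counts[m.group(1)] += 1
    return counts

if __name__ == "__main__":
    for gate, n in unrecognized_gates("kubelet.log").most_common():
        print(f"{n:3d}  {gate}")
```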
Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418082 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418092 4970 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418101 4970 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418109 4970 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418119 4970 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418127 4970 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418135 4970 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418143 4970 feature_gate.go:330] unrecognized feature gate: PlatformOperators Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418152 4970 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418160 4970 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418168 4970 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418175 4970 feature_gate.go:330] unrecognized feature gate: PinnedImages Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418183 4970 feature_gate.go:330] unrecognized feature gate: InsightsConfig Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418190 4970 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418198 4970 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.418208 4970 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.418220 4970 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.419802 4970 server.go:940] "Client rotation is on, will bootstrap in background" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.426476 4970 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.426644 4970 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
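The entries just below pick up client certificate rotation: the certificate manager reports an expiration of 2026-02-24 05:52:08 UTC, a rotation deadline of 2026-01-16 08:25:40 UTC (the manager picks a jittered point late in the cert's validity), and a wait of 2590h39m13s. The wait is simply deadline minus "now"; a quick stdlib check of that arithmetic, with the timestamps copied from those entries:

```python
from datetime import datetime, timezone

# Timestamps copied from the certificate_manager entries below (sketch only).
now = datetime(2025, 9, 30, 9, 46, 27, 431699, tzinfo=timezone.utc)      # log time of the "Waiting" entry
deadline = datetime(2026, 1, 16, 8, 25, 40, 633539, tzinfo=timezone.utc)  # rotation deadline

wait = deadline - now
hours, rem = divmod(int(wait.total_seconds()), 3600)
minutes, seconds = divmod(rem, 60)
print(f"{hours}h{minutes}m{seconds}s")  # -> 2590h39m13s, matching the log
```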
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.430105 4970 server.go:997] "Starting client certificate rotation" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.430164 4970 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.431193 4970 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-16 08:25:40.633539719 +0000 UTC Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.431699 4970 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 2590h39m13.201846279s for next certificate rotation Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.462643 4970 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.467413 4970 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.489016 4970 log.go:25] "Validated CRI v1 runtime API" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.529641 4970 log.go:25] "Validated CRI v1 image API" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.534104 4970 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.540585 4970 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-09-30-09-41-39-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.540641 4970 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}] Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.573567 4970 manager.go:217] Machine: {Timestamp:2025-09-30 09:46:27.57013541 +0000 UTC m=+0.641986424 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:2e6e3b7a-0e45-4517-abb1-931732be7041 BootID:686f6a6c-33dd-428d-95f2-c1d9edb8dca6 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 
DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:aa:43:fd Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:aa:43:fd Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:73:12:57 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:46:bf:df Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:43:d9:e4 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:66:7a:c4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0a:e5:4f:0f:0b:bb Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:0e:17:97:36:6d:86 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] 
SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.573844 4970 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.574019 4970 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.576203 4970 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.576453 4970 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.576503 4970 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.576733 4970 topology_manager.go:138] "Creating topology manager with none policy" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.576745 4970 container_manager_linux.go:303] "Creating device plugin manager" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.577257 4970 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.577291 4970 server.go:66] "Creating device plugin registration server" version="v1beta1" 
socket="/var/lib/kubelet/device-plugins/kubelet.sock" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.577495 4970 state_mem.go:36] "Initialized new in-memory state store" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.577592 4970 server.go:1245] "Using root directory" path="/var/lib/kubelet" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.583165 4970 kubelet.go:418] "Attempting to sync node with API server" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.583193 4970 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.583227 4970 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.583244 4970 kubelet.go:324] "Adding apiserver pod source" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.583261 4970 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.587596 4970 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.588608 4970 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.590444 4970 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.591503 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.591502 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Sep 30 09:46:27 crc kubenswrapper[4970]: E0930 09:46:27.591691 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Sep 30 09:46:27 crc kubenswrapper[4970]: E0930 09:46:27.591619 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.591941 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.591967 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.591976 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.592010 4970 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/host-path" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.592028 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.592037 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.592047 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.592063 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.592074 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.592084 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.592113 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.592122 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.593117 4970 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.593620 4970 server.go:1280] "Started kubelet" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.597295 4970 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.597370 4970 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 30 09:46:27 crc systemd[1]: Started Kubernetes Kubelet. Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.599309 4970 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.599769 4970 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.602349 4970 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.602594 4970 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.602822 4970 volume_manager.go:287] "The desired_state_of_world populator starts" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.602850 4970 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 30 09:46:27 crc kubenswrapper[4970]: E0930 09:46:27.602931 4970 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.602675 4970 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:55:28.977791998 +0000 UTC Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.603161 4970 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 2237h9m1.374637444s for next certificate rotation Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.603189 4970 desired_state_of_world_populator.go:146] "Desired state populator starts 
to run" Sep 30 09:46:27 crc kubenswrapper[4970]: E0930 09:46:27.603701 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="200ms" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.608515 4970 server.go:460] "Adding debug handlers to kubelet server" Sep 30 09:46:27 crc kubenswrapper[4970]: E0930 09:46:27.608550 4970 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.132:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186a0659e44b2fa5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-09-30 09:46:27.593588645 +0000 UTC m=+0.665439589,LastTimestamp:2025-09-30 09:46:27.593588645 +0000 UTC m=+0.665439589,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.610553 4970 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.610592 4970 factory.go:55] Registering systemd factory Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.610610 4970 factory.go:221] Registration of the systemd container factory successfully Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.611544 4970 factory.go:153] Registering CRI-O factory Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.611597 4970 factory.go:221] Registration of the crio container factory successfully Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.611643 4970 factory.go:103] Registering Raw factory Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.611719 4970 manager.go:1196] Started watching for new ooms in manager Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.611572 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Sep 30 09:46:27 crc kubenswrapper[4970]: E0930 09:46:27.612484 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.614843 4970 manager.go:319] Starting recovery of all containers Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.617658 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Sep 30 09:46:27 crc 
kubenswrapper[4970]: I0930 09:46:27.617709 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.617723 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.617737 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.617748 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.617761 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.617774 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.617786 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.617801 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.617812 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.617824 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620397 4970 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" 
deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620450 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620472 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620493 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620508 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620545 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620560 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620575 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620592 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620605 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620619 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620632 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620648 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620662 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620688 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620705 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620722 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620739 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620755 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620769 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620782 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620797 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620810 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620824 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620836 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620849 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620863 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620876 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620891 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620905 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620917 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620930 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620943 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620957 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620969 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.620980 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621010 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621025 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621040 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621053 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621066 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621080 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621151 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621168 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621183 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621196 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621212 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621226 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621241 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621254 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621268 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621281 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621295 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621336 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621352 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621366 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621379 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621392 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621404 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621416 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621430 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621443 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621457 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621470 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621483 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621496 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621509 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621521 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621534 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621547 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621560 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621574 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621587 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621599 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621613 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621625 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621639 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621652 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621664 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621677 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621690 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621704 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621717 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621730 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621744 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621758 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621771 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621784 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621796 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621808 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621822 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621835 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621847 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621863 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621884 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621899 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621914 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621928 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621942 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621956 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.621971 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622003 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622019 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622033 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622048 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622062 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622075 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622089 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622103 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622116 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622129 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622141 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622156 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622170 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622187 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622201 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622214 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622229 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622242 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622256 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622269 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622281 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622295 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622309 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622322 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622337 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622350 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622363 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622376 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622391 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622404 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622417 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622430 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" 
volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622443 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622457 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622470 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622482 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622495 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622512 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622528 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622544 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622572 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622589 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622607 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622622 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622635 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622648 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622662 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622676 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622690 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622703 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622717 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622733 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622762 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622782 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622799 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622814 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622829 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622845 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622863 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622876 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622887 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622899 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622923 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622936 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622949 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622962 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.622976 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623009 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623023 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623034 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623046 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623060 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623072 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623085 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623100 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623113 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623125 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623137 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623149 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623163 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623175 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623188 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623199 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623241 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623255 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623267 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623280 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623291 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623303 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623315 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623327 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623339 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623350 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623361 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623373 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623387 4970 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623399 4970 reconstruct.go:97] "Volume reconstruction finished" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.623408 4970 reconciler.go:26] "Reconciler: start to sync state" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.644966 4970 manager.go:324] Recovery completed Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.658769 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.660802 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.660842 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.660858 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.661860 4970 cpu_manager.go:225] "Starting CPU manager" policy="none"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.661880 4970 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.661901 4970 state_mem.go:36] "Initialized new in-memory state store"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.663305 4970 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.667096 4970 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.667152 4970 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.667184 4970 kubelet.go:2335] "Starting kubelet main sync loop"
Sep 30 09:46:27 crc kubenswrapper[4970]: E0930 09:46:27.667293 4970 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 30 09:46:27 crc kubenswrapper[4970]: W0930 09:46:27.668219 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused
Sep 30 09:46:27 crc kubenswrapper[4970]: E0930 09:46:27.668286 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.688246 4970 policy_none.go:49] "None policy: Start"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.689319 4970 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.689375 4970 state_mem.go:35] "Initializing new in-memory state store"
Sep 30 09:46:27 crc kubenswrapper[4970]: E0930 09:46:27.703952 4970 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.742906 4970 manager.go:334] "Starting Device Plugin manager"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.743236 4970 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.743266 4970 server.go:79] "Starting device plugin registration server"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.743867 4970 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.743896 4970 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
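[Editor's note] Every list/watch against https://api-int.crc.testing:6443 fails with "connection refused" at this stage because kube-apiserver itself runs as a static pod that this same kubelet has not started yet; the reflector errors are expected during bootstrap and stop once the apiserver container comes up. A reachability probe like the following (hostname and port taken from the log; the probe is an illustration, not how the kubelet itself checks) distinguishes "connection refused" (host up, nothing listening) from a timeout (host unreachable):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // Probes the apiserver endpoint the kubelet is failing to reach.
    func main() {
    	conn, err := net.DialTimeout("tcp", "api-int.crc.testing:6443", 2*time.Second)
    	if err != nil {
    		// During bootstrap this prints "... connect: connection refused",
    		// matching the reflector errors in the log above.
    		fmt.Println("probe failed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is accepting connections")
    }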
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.744158 4970 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.744304 4970 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.744326 4970 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 30 09:46:27 crc kubenswrapper[4970]: E0930 09:46:27.755938 4970 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.767530 4970 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.767628 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.768740 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.768792 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.768805 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.768940 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.769139 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.769202 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.769763 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.769801 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.769811 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.769918 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.770090 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.770131 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.770605 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.770625 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.770635 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.770722 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.771070 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.771106 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.771178 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.771201 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.771213 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.771558 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.771611 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.771614 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.771629 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.771698 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.771772 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.772022 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.772183 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.772239 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.773146 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.773180 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.773208 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.773886 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.773952 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.773970 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.774188 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.774217 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.774240 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.774443 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.774490 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.776443 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.776470 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.776483 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:27 crc kubenswrapper[4970]: E0930 09:46:27.805135 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="400ms" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.827329 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.827399 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.827435 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.827475 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.827593 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.827661 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.827697 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" 
(UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.827725 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.827751 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.827822 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.827908 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.827970 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.828110 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.828167 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.828193 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.844444 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.845699 4970 
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.844444 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.845699 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.845754 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.845775 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.845813 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: E0930 09:46:27.846445 4970 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929207 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929257 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929283 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929300 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929319 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929336 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929351 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929369 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929393 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929414 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929406 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929447 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929500 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929433 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929553 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929553 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929559 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929601 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929545 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929571 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929646 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929631 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929479 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929685 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929726 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929613 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929579 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
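[Editor's note] The "MountVolume started" and "MountVolume.SetUp succeeded" entries interleave because the operations run concurrently, so their timestamps are not monotonic per pod. Each operation is keyed by the UniqueName visible in the messages, which for this plugin is kubernetes.io/host-path/<pod-UID>-<volume-name> (format taken from the log itself). A sketch that splits one apart:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // Splits a host-path UniqueName as printed in the log into pod UID and
    // volume name. The static pod UIDs here are 32 dash-free hex chars, so
    // cutting at the first "-" works; dashed API pod UIDs would need a
    // smarter split (e.g. on the 36th character).
    func main() {
    	unique := "kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir"
    	rest := strings.TrimPrefix(unique, "kubernetes.io/host-path/")
    	uid, volume, ok := strings.Cut(rest, "-")
    	if !ok {
    		panic("unexpected UniqueName format")
    	}
    	fmt.Printf("plugin=host-path podUID=%s volume=%s\n", uid, volume)
    }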
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929711 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Sep 30 09:46:27 crc kubenswrapper[4970]: I0930 09:46:27.929847 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.047413 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.049627 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.049692 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.049713 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.049751 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 09:46:28 crc kubenswrapper[4970]: E0930 09:46:28.050444 4970 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.107642 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.119639 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.145391 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.153262 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.158526 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 09:46:28 crc kubenswrapper[4970]: W0930 09:46:28.169961 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-bdd10b3bcdb71750e9e8cbb3ddf3d37f37b99e5cf55e273213ba3b4a4cd70732 WatchSource:0}: Error finding container bdd10b3bcdb71750e9e8cbb3ddf3d37f37b99e5cf55e273213ba3b4a4cd70732: Status 404 returned error can't find the container with id bdd10b3bcdb71750e9e8cbb3ddf3d37f37b99e5cf55e273213ba3b4a4cd70732 Sep 30 09:46:28 crc kubenswrapper[4970]: W0930 09:46:28.172346 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-787807c92a557f2f213c520fa6d92378ba7c30cc608169ec89dbe59b9fb640dc WatchSource:0}: Error finding container 787807c92a557f2f213c520fa6d92378ba7c30cc608169ec89dbe59b9fb640dc: Status 404 returned error can't find the container with id 787807c92a557f2f213c520fa6d92378ba7c30cc608169ec89dbe59b9fb640dc Sep 30 09:46:28 crc kubenswrapper[4970]: W0930 09:46:28.187107 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-6ff7b713441d4a415d2685f3c29e15ec90410ba17b26494895c8e7ceb926c9e2 WatchSource:0}: Error finding container 6ff7b713441d4a415d2685f3c29e15ec90410ba17b26494895c8e7ceb926c9e2: Status 404 returned error can't find the container with id 6ff7b713441d4a415d2685f3c29e15ec90410ba17b26494895c8e7ceb926c9e2 Sep 30 09:46:28 crc kubenswrapper[4970]: W0930 09:46:28.190907 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c165a60f4bf2965767f5aeccb12709354e579f38305408d17ee596a062429f26 WatchSource:0}: Error finding container c165a60f4bf2965767f5aeccb12709354e579f38305408d17ee596a062429f26: Status 404 returned error can't find the container with id c165a60f4bf2965767f5aeccb12709354e579f38305408d17ee596a062429f26 Sep 30 09:46:28 crc kubenswrapper[4970]: W0930 09:46:28.194975 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6d99421e47ab1155c2ce0d14e5ef8fa2cd2d950004710150e84c022f6f7d6220 WatchSource:0}: Error finding container 6d99421e47ab1155c2ce0d14e5ef8fa2cd2d950004710150e84c022f6f7d6220: Status 404 returned error can't find the container with id 6d99421e47ab1155c2ce0d14e5ef8fa2cd2d950004710150e84c022f6f7d6220 Sep 30 09:46:28 crc kubenswrapper[4970]: E0930 09:46:28.206631 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="800ms" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.451407 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.453722 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.453785 4970 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.453803 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.453837 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 09:46:28 crc kubenswrapper[4970]: E0930 09:46:28.454539 4970 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Sep 30 09:46:28 crc kubenswrapper[4970]: W0930 09:46:28.487900 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Sep 30 09:46:28 crc kubenswrapper[4970]: E0930 09:46:28.488084 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Sep 30 09:46:28 crc kubenswrapper[4970]: W0930 09:46:28.542719 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Sep 30 09:46:28 crc kubenswrapper[4970]: E0930 09:46:28.542814 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.601420 4970 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.672278 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6ff7b713441d4a415d2685f3c29e15ec90410ba17b26494895c8e7ceb926c9e2"} Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.673574 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"bdd10b3bcdb71750e9e8cbb3ddf3d37f37b99e5cf55e273213ba3b4a4cd70732"} Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.674705 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"787807c92a557f2f213c520fa6d92378ba7c30cc608169ec89dbe59b9fb640dc"} Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.676455 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6d99421e47ab1155c2ce0d14e5ef8fa2cd2d950004710150e84c022f6f7d6220"} Sep 30 09:46:28 crc kubenswrapper[4970]: I0930 09:46:28.678486 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c165a60f4bf2965767f5aeccb12709354e579f38305408d17ee596a062429f26"} Sep 30 09:46:28 crc kubenswrapper[4970]: W0930 09:46:28.703859 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Sep 30 09:46:28 crc kubenswrapper[4970]: E0930 09:46:28.703983 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Sep 30 09:46:28 crc kubenswrapper[4970]: W0930 09:46:28.800488 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Sep 30 09:46:28 crc kubenswrapper[4970]: E0930 09:46:28.800618 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Sep 30 09:46:29 crc kubenswrapper[4970]: E0930 09:46:29.007614 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="1.6s" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.254797 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.256750 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.256789 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.256800 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.256827 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 09:46:29 crc kubenswrapper[4970]: E0930 09:46:29.257238 4970 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.601236 4970 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.682481 4970 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9" exitCode=0 Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.682528 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9"} Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.682582 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.683399 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.683456 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.683471 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.686996 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703"} Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.687019 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689"} Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.687030 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d"} Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.689175 4970 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db" exitCode=0 Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.689210 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db"} Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.689306 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.690065 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.690087 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.690098 4970 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.690971 4970 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1" exitCode=0 Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.691034 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1"} Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.691100 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.691490 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.691913 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.691949 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.691961 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.692534 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.692557 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.692566 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.693724 4970 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="6f489cb83c02425534a42ee61a9061dc94bb10a74e4863101b9a1532607746ed" exitCode=0 Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.693757 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"6f489cb83c02425534a42ee61a9061dc94bb10a74e4863101b9a1532607746ed"} Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.693812 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.701865 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.701915 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:29 crc kubenswrapper[4970]: I0930 09:46:29.701925 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.600721 4970 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.132:6443: 
connect: connection refused Sep 30 09:46:30 crc kubenswrapper[4970]: E0930 09:46:30.608423 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.132:6443: connect: connection refused" interval="3.2s" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.697626 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fcb713a1f422bed1215b09d05f3bc528355393af9e0aa765d93d50ecd4d044aa"} Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.697745 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.699571 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.699596 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.699604 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.702065 4970 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bda44e8410e4ecf373008724ba9f4d1c3bd0d5b808975c234ee5cac0fcc89c96" exitCode=0 Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.702142 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bda44e8410e4ecf373008724ba9f4d1c3bd0d5b808975c234ee5cac0fcc89c96"} Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.702171 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.702952 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.702999 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.703013 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.705832 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"72de87daf9a24b2d7b7f2a45bce5ca85e4476ed82f44a39639554aad8e4b328c"} Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.705872 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"266e8cda32e3cc8777714f25448405d0487add6c95f695e43ee9f7674ea1287d"} Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.705889 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1889d9f91438a275e1ff6fc0adc74a42e47bcab961f06b96fd06acdf63819fea"} Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.705901 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.707136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.707192 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.707218 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.715867 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.716266 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57"} Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.716914 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.716954 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.716967 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.719726 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9"} Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.719750 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e"} Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.719758 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f"} Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.719767 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419"} Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.857372 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.858401 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.858446 4970 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.858468 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:30 crc kubenswrapper[4970]: I0930 09:46:30.858495 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 09:46:30 crc kubenswrapper[4970]: E0930 09:46:30.859002 4970 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.132:6443: connect: connection refused" node="crc" Sep 30 09:46:31 crc kubenswrapper[4970]: W0930 09:46:31.414272 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.132:6443: connect: connection refused Sep 30 09:46:31 crc kubenswrapper[4970]: E0930 09:46:31.414939 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.132:6443: connect: connection refused" logger="UnhandledError" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.726472 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3f4b3468b7a72c61779b1ed803e67a50f268f5ca127996fd8677961b84d13739"} Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.727603 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.729036 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.729079 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.729097 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.729046 4970 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="409187cbaefbc2babe82c217518aea969a092b2f1a44835c7806eb562ce71244" exitCode=0 Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.729070 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"409187cbaefbc2babe82c217518aea969a092b2f1a44835c7806eb562ce71244"} Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.729192 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.729230 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.729406 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.729865 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 
09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.729885 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.730346 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.730384 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.730402 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.730689 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.730711 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.730722 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.731384 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.731410 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.731420 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.732085 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.732121 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:31 crc kubenswrapper[4970]: I0930 09:46:31.732136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:32 crc kubenswrapper[4970]: I0930 09:46:32.739428 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"60a686fd2cce666cb4e37040cd32d15bcd4febf05abf56ec356f89741d62d4ad"} Sep 30 09:46:32 crc kubenswrapper[4970]: I0930 09:46:32.739534 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"accbbd9f81b2b1d1e30db2f88e9df127ceb7c89aeb8ebcf5fec4421bb1c7126d"} Sep 30 09:46:32 crc kubenswrapper[4970]: I0930 09:46:32.739578 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5a363b723273b7e0114c1af5c751fefffb270be73110e45d17a55e57fa336f80"} Sep 30 09:46:32 crc kubenswrapper[4970]: I0930 09:46:32.739605 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"93d315ce90d883fc19991afae7269552466f8f1f202510d05bd16b3edff2c43f"} Sep 30 09:46:32 crc kubenswrapper[4970]: I0930 09:46:32.739458 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 
30 09:46:32 crc kubenswrapper[4970]: I0930 09:46:32.739694 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:32 crc kubenswrapper[4970]: I0930 09:46:32.741295 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:32 crc kubenswrapper[4970]: I0930 09:46:32.741352 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:32 crc kubenswrapper[4970]: I0930 09:46:32.741364 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:33 crc kubenswrapper[4970]: I0930 09:46:33.453641 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 09:46:33 crc kubenswrapper[4970]: I0930 09:46:33.746005 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"40d13bf6194239829062ad66b2d2fb349f4b5ff8a12459d68d9e2a452b1a093c"} Sep 30 09:46:33 crc kubenswrapper[4970]: I0930 09:46:33.746064 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 09:46:33 crc kubenswrapper[4970]: I0930 09:46:33.746115 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:33 crc kubenswrapper[4970]: I0930 09:46:33.746116 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:33 crc kubenswrapper[4970]: I0930 09:46:33.747418 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:33 crc kubenswrapper[4970]: I0930 09:46:33.747432 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:33 crc kubenswrapper[4970]: I0930 09:46:33.747464 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:33 crc kubenswrapper[4970]: I0930 09:46:33.747480 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:33 crc kubenswrapper[4970]: I0930 09:46:33.747445 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:33 crc kubenswrapper[4970]: I0930 09:46:33.747557 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.059290 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.061036 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.061084 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.061097 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.061123 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.176751 4970 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.720815 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.721120 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.722387 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.722466 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.722487 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.748883 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.750407 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.750448 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.750456 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.970742 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.971083 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.972788 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.972851 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:34 crc kubenswrapper[4970]: I0930 09:46:34.972866 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:35 crc kubenswrapper[4970]: I0930 09:46:35.558826 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Sep 30 09:46:35 crc kubenswrapper[4970]: I0930 09:46:35.750895 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:35 crc kubenswrapper[4970]: I0930 09:46:35.751686 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:35 crc kubenswrapper[4970]: I0930 09:46:35.751719 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:35 crc kubenswrapper[4970]: I0930 09:46:35.751726 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:36 crc kubenswrapper[4970]: I0930 09:46:36.377835 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 09:46:36 crc kubenswrapper[4970]: I0930 09:46:36.378083 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 09:46:36 crc kubenswrapper[4970]: I0930 09:46:36.378136 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:36 crc kubenswrapper[4970]: I0930 09:46:36.379481 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:36 crc kubenswrapper[4970]: I0930 09:46:36.379533 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:36 crc kubenswrapper[4970]: I0930 09:46:36.379551 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:36 crc kubenswrapper[4970]: I0930 09:46:36.754252 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:36 crc kubenswrapper[4970]: I0930 09:46:36.755832 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:36 crc kubenswrapper[4970]: I0930 09:46:36.755889 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:36 crc kubenswrapper[4970]: I0930 09:46:36.755905 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:37 crc kubenswrapper[4970]: I0930 09:46:37.721704 4970 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Sep 30 09:46:37 crc kubenswrapper[4970]: I0930 09:46:37.721798 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 09:46:37 crc kubenswrapper[4970]: E0930 09:46:37.756328 4970 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Sep 30 09:46:37 crc kubenswrapper[4970]: I0930 09:46:37.937709 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 09:46:37 crc kubenswrapper[4970]: I0930 09:46:37.938034 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:37 crc kubenswrapper[4970]: I0930 09:46:37.939637 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:37 crc kubenswrapper[4970]: I0930 09:46:37.939672 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:37 crc kubenswrapper[4970]: I0930 09:46:37.939682 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.116657 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.117701 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.121350 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.121409 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.121431 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.206591 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.206911 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.208764 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.208859 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.208888 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.217191 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.759913 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.760097 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.761806 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.761849 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:38 crc kubenswrapper[4970]: I0930 09:46:38.761861 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:39 crc kubenswrapper[4970]: I0930 09:46:39.761911 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:39 crc kubenswrapper[4970]: I0930 09:46:39.762860 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:39 crc kubenswrapper[4970]: I0930 09:46:39.762913 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:39 crc kubenswrapper[4970]: I0930 09:46:39.762935 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:39 crc kubenswrapper[4970]: I0930 09:46:39.770234 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:40 crc kubenswrapper[4970]: I0930 09:46:40.764809 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:40 crc kubenswrapper[4970]: I0930 09:46:40.766214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:40 crc kubenswrapper[4970]: I0930 09:46:40.766259 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:40 crc kubenswrapper[4970]: I0930 09:46:40.766271 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:41 crc kubenswrapper[4970]: W0930 09:46:41.571563 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.571689 4970 trace.go:236] Trace[1879827562]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Sep-2025 09:46:31.569) (total time: 10001ms): Sep 30 09:46:41 crc kubenswrapper[4970]: Trace[1879827562]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:46:41.571) Sep 30 09:46:41 crc kubenswrapper[4970]: Trace[1879827562]: [10.001777379s] [10.001777379s] END Sep 30 09:46:41 crc kubenswrapper[4970]: E0930 09:46:41.571722 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.601981 4970 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Sep 30 09:46:41 crc kubenswrapper[4970]: W0930 09:46:41.631670 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.631756 4970 trace.go:236] Trace[1662024659]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Sep-2025 09:46:31.630) (total time: 10001ms): Sep 30 09:46:41 crc kubenswrapper[4970]: Trace[1662024659]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:46:41.631) Sep 30 09:46:41 crc kubenswrapper[4970]: Trace[1662024659]: [10.001339967s] [10.001339967s] END Sep 30 09:46:41 crc kubenswrapper[4970]: E0930 09:46:41.631780 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Sep 30 
09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.719222 4970 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48240->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.719280 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48240->192.168.126.11:17697: read: connection reset by peer"
Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.769351 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.771066 4970 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3f4b3468b7a72c61779b1ed803e67a50f268f5ca127996fd8677961b84d13739" exitCode=255
Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.771131 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3f4b3468b7a72c61779b1ed803e67a50f268f5ca127996fd8677961b84d13739"}
Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.771376 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.772816 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.772882 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.772901 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.773804 4970 scope.go:117] "RemoveContainer" containerID="3f4b3468b7a72c61779b1ed803e67a50f268f5ca127996fd8677961b84d13739"
Sep 30 09:46:41 crc kubenswrapper[4970]: W0930 09:46:41.859238 4970 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
Sep 30 09:46:41 crc kubenswrapper[4970]: I0930 09:46:41.859387 4970 trace.go:236] Trace[846009344]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Sep-2025 09:46:31.857) (total time: 10001ms):
Sep 30 09:46:41 crc kubenswrapper[4970]: Trace[846009344]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:46:41.859)
Sep 30 09:46:41 crc kubenswrapper[4970]: Trace[846009344]: [10.001486331s] [10.001486331s] END
Sep 30 09:46:41 crc kubenswrapper[4970]: E0930 09:46:41.859416 4970 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError"
Sep 30 09:46:42 crc kubenswrapper[4970]: I0930 09:46:42.777285 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Sep 30 09:46:42 crc kubenswrapper[4970]: I0930 09:46:42.779559 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1"}
Sep 30 09:46:42 crc kubenswrapper[4970]: I0930 09:46:42.779743 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 30 09:46:42 crc kubenswrapper[4970]: I0930 09:46:42.780721 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:46:42 crc kubenswrapper[4970]: I0930 09:46:42.780783 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:46:42 crc kubenswrapper[4970]: I0930 09:46:42.780800 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:46:42 crc kubenswrapper[4970]: I0930 09:46:42.881400 4970 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Sep 30 09:46:42 crc kubenswrapper[4970]: I0930 09:46:42.881486 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Sep 30 09:46:42 crc kubenswrapper[4970]: I0930 09:46:42.886875 4970 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Sep 30 09:46:42 crc kubenswrapper[4970]: I0930 09:46:42.886958 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Sep 30 09:46:44 crc kubenswrapper[4970]: I0930 09:46:44.206681 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Sep 30 09:46:44 crc kubenswrapper[4970]: I0930 09:46:44.206819 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 30 09:46:44 crc kubenswrapper[4970]: I0930 09:46:44.207881 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:46:44 crc kubenswrapper[4970]: I0930 09:46:44.207913 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:46:44 crc kubenswrapper[4970]: I0930 09:46:44.207923 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:46:44 crc kubenswrapper[4970]: I0930 09:46:44.216645 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Sep 30 09:46:44 crc kubenswrapper[4970]: I0930 09:46:44.785903 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 30 09:46:44 crc kubenswrapper[4970]: I0930 09:46:44.787242 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:46:44 crc kubenswrapper[4970]: I0930 09:46:44.787307 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:46:44 crc kubenswrapper[4970]: I0930 09:46:44.787326 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:46:46 crc kubenswrapper[4970]: I0930 09:46:46.240545 4970 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Sep 30 09:46:46 crc kubenswrapper[4970]: I0930 09:46:46.386656 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Sep 30 09:46:46 crc kubenswrapper[4970]: I0930 09:46:46.386819 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 30 09:46:46 crc kubenswrapper[4970]: I0930 09:46:46.386975 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Sep 30 09:46:46 crc kubenswrapper[4970]: I0930 09:46:46.388488 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:46:46 crc kubenswrapper[4970]: I0930 09:46:46.388524 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:46:46 crc kubenswrapper[4970]: I0930 09:46:46.388547 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:46:46 crc kubenswrapper[4970]: I0930 09:46:46.391951 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Sep 30 09:46:46 crc kubenswrapper[4970]: I0930 09:46:46.790863 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 30 09:46:46 crc kubenswrapper[4970]: I0930 09:46:46.791859 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:46:46 crc kubenswrapper[4970]: I0930 09:46:46.791925 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:46:46 crc kubenswrapper[4970]: I0930 09:46:46.791936 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:46:47 crc kubenswrapper[4970]: I0930 09:46:47.386972 4970 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Sep 30 09:46:47 crc kubenswrapper[4970]: I0930 09:46:47.722051 4970 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Sep 30 09:46:47 crc kubenswrapper[4970]: I0930 09:46:47.722189 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Sep 30 09:46:47 crc kubenswrapper[4970]: E0930 09:46:47.756569 4970 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Sep 30 09:46:47 crc kubenswrapper[4970]: I0930 09:46:47.793631 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Sep 30 09:46:47 crc kubenswrapper[4970]: I0930 09:46:47.794938 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:46:47 crc kubenswrapper[4970]: I0930 09:46:47.795014 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:46:47 crc kubenswrapper[4970]: I0930 09:46:47.795026 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:46:47 crc kubenswrapper[4970]: E0930 09:46:47.877740 4970 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Sep 30 09:46:47 crc kubenswrapper[4970]: I0930 09:46:47.879623 4970 trace.go:236] Trace[1143486535]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Sep-2025 09:46:37.390) (total time: 10488ms):
Sep 30 09:46:47 crc kubenswrapper[4970]: Trace[1143486535]: ---"Objects listed" error: 10488ms (09:46:47.879)
Sep 30 09:46:47 crc kubenswrapper[4970]: Trace[1143486535]: [10.488789205s] [10.488789205s] END
Sep 30 09:46:47 crc kubenswrapper[4970]: I0930 09:46:47.879664 4970 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Sep 30 09:46:47 crc kubenswrapper[4970]: I0930 09:46:47.881717 4970 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Sep 30 09:46:47 crc kubenswrapper[4970]: E0930 09:46:47.881962 4970 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Sep 30 09:46:47 crc kubenswrapper[4970]: I0930 09:46:47.960260 4970 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.596542 4970 apiserver.go:52] "Watching apiserver"
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.624447 4970 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
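
The records above capture the kubelet mid-restart of kube-apiserver-crc: the liveness probe dies with a connection reset, the startup probe then reports 403 because the unauthenticated probe identity (system:anonymous) is not yet allowed to read /livez, informer LISTs time out on TLS handshakes, and node registration is refused until config caches sync. When reading a window like this, a tally of the prober records is often easier to scan than the raw lines. A minimal sketch, assuming the journal has been saved as plain text; the filename kubelet.log and the regex are illustrative, not anything the kubelet itself provides:

    import re
    from collections import Counter

    # Matches kubelet prober records like:
    #   ... prober.go:107] "Probe failed" probeType="Liveness" pod="ns/name" ... containerName="c" ...
    PROBE_RE = re.compile(
        r'"Probe failed" probeType="(?P<type>\w+)" pod="(?P<pod>[^"]+)".*?containerName="(?P<container>[^"]+)"'
    )

    def tally_probe_failures(path: str) -> Counter:
        counts: Counter = Counter()
        with open(path, encoding="utf-8") as f:
            for line in f:
                m = PROBE_RE.search(line)
                if m:
                    counts[(m["type"], m["pod"], m["container"])] += 1
        return counts

    if __name__ == "__main__":
        for (ptype, pod, container), n in tally_probe_failures("kubelet.log").most_common():
            print(f"{n:3d}  {ptype:<9} {pod} / {container}")

Grouping by (probeType, pod, container) mirrors the fields the kubelet itself emits in its prober.go:107 records.
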
pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.624933 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.625021 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.625150 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.625193 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.625250 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.625194 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.625369 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.625390 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.625540 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.633256 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.633272 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.633256 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.633516 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.633560 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.633703 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.634055 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.636860 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.638053 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.704235 4970 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.755851 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.755851 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.779028 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.786964 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787017 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787040 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787061 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787081 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787098 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787113 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787127 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787144 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787161 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787175 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787194 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787210 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787226 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787243 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787261 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787277 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787318 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787335 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787324 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787350 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787370 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787385 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787402 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787419 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787435 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787449 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787463 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787479 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787498 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787513 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787530 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787546 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787561 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787576 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787591 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787610 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787651 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787688 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787714 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787735 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787755 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787771 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787788 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787803 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787951 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787976 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788010 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788027 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788045 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788062 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788078 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788095 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788112 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788128 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788147 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788162 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788198 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788214 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788229 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788245 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788262 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788280 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788296 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788313 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788344 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788360 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788375 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788394 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788413 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788430 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788446 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788464 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788479 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788495 4970 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788511 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788527 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788544 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788560 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788577 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788593 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788609 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788624 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788639 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788654 4970 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788670 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788690 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788705 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788722 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788740 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788756 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788772 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788789 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788805 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788822 
4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788837 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788853 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788868 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788883 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788898 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788914 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788929 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788943 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788961 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788978 4970 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789011 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789027 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789044 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789061 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789078 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789103 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789121 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789137 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789153 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Sep 30 09:46:48 crc 
kubenswrapper[4970]: I0930 09:46:48.789171 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789187 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789205 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789220 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789237 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789253 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789275 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787369 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787525 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787681 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787893 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.787905 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788098 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788253 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788260 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788385 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791174 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788396 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788509 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788529 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788643 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788655 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788701 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788782 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788796 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788894 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788930 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.788970 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789080 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789093 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789164 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789215 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789280 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.789306 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:46:49.289285523 +0000 UTC m=+22.361136457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789325 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789445 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789459 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789674 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789678 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789713 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789814 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.789881 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790018 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790214 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790257 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790360 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). 
InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790359 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790436 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790452 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790492 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790579 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790603 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790720 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790747 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790749 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790856 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790885 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.790885 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791011 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791292 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791411 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791441 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791560 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791674 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791678 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791691 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791672 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791679 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791720 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791754 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791774 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791795 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791822 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791843 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791864 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791899 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791907 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791924 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791952 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.791974 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792018 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792039 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792042 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792046 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792043 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792159 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792191 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792198 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792270 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792277 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792327 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792347 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792355 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792382 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792407 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792431 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792455 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792479 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792515 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792539 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792562 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792587 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod 
\"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792610 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792637 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792659 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794326 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794576 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794611 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794636 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794660 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794687 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794713 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794738 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794767 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794791 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794815 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794840 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794866 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794891 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794917 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794942 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794966 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795008 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795027 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795044 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795062 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795081 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795099 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795192 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795223 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795249 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795275 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795302 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795325 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795349 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795376 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795400 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795421 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795438 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795455 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795471 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795489 4970 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795508 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795525 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795542 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795560 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795581 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795599 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795617 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795712 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795744 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795772 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795804 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795835 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795867 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795888 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795936 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796017 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796052 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796108 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796164 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796195 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796250 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796385 4970 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796433 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796451 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796466 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796506 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796524 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796538 4970 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc 
kubenswrapper[4970]: I0930 09:46:48.796554 4970 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796572 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796588 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796601 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796615 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796629 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796643 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796657 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796672 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796686 4970 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796701 4970 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796716 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796728 4970 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc 
kubenswrapper[4970]: I0930 09:46:48.796741 4970 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796754 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796768 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796777 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796787 4970 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796803 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796812 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796822 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796831 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796843 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796853 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796863 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796873 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 
crc kubenswrapper[4970]: I0930 09:46:48.796882 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796892 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796903 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796913 4970 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796922 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796932 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796941 4970 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796950 4970 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796961 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796971 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796980 4970 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797007 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797016 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 
09:46:48.797026 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797035 4970 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797047 4970 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797061 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797076 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797090 4970 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797101 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797111 4970 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797121 4970 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797130 4970 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797140 4970 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797149 4970 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797159 4970 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797169 4970 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797178 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797187 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797196 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797205 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797219 4970 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797231 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797244 4970 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797260 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.799125 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.800745 4970 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.801954 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.802284 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.810006 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.811894 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.817562 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.821950 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792478 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792559 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792679 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.792702 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.793213 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.793593 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.793603 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.793940 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.793988 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.793212 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.794503 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795229 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795465 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795525 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795553 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795638 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795696 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795812 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795827 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.795962 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796066 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796147 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796153 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796262 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796300 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796474 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796536 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796546 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796670 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796757 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796878 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.796951 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797045 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797067 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797155 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.797303 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.799503 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.799727 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.800369 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.802101 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.829065 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:49.329037781 +0000 UTC m=+22.400888715 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.802614 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.829298 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.829327 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.829331 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.829456 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.829349 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:49.329336019 +0000 UTC m=+22.401186953 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.802960 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.803273 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.803728 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.804177 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.804841 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.805334 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.805522 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.806011 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.806210 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.807362 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.807604 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.807846 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.808099 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.808175 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.808451 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.808620 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.808919 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.809082 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.809312 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.809407 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.810481 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.811041 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.811464 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.811751 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.811953 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.812467 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.830118 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.831664 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.832562 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.832721 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.833169 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.812534 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.812784 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.813333 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.814923 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.815082 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.815371 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.818014 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.818330 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.818428 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.819252 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.819593 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.819911 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.820479 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.820528 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.821046 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.821200 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.821474 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.821642 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.821716 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.821784 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.821844 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.835557 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.835573 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.822220 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.822230 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.822637 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.822692 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.822779 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.822920 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.823196 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.823488 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.823761 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.824622 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.825711 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.821542 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.802852 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.835745 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.829799 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.837129 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.837148 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.837157 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.838389 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.838560 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.838938 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.839678 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.840193 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:49.340127156 +0000 UTC m=+22.411978090 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.840275 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:49.340245089 +0000 UTC m=+22.412096013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.840365 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.841524 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.841609 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.841765 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.841976 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.842049 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.842722 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.843440 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.843655 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.843677 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.843906 4970 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1" exitCode=255 Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.843961 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1"} Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.844050 4970 scope.go:117] "RemoveContainer" containerID="3f4b3468b7a72c61779b1ed803e67a50f268f5ca127996fd8677961b84d13739" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.844413 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.844973 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.845784 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.847296 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.848806 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.851891 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.870908 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.881641 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.881674 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.883598 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.897328 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898759 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898818 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898864 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" 
DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898877 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898888 4970 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898898 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898909 4970 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898920 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898930 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898943 4970 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898955 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898964 4970 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898973 4970 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.898985 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899011 4970 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899019 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node 
\"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899028 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899036 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899044 4970 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899056 4970 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899065 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899075 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899086 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899095 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899103 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899111 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899152 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899163 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899175 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: 
I0930 09:46:48.899187 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899198 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899210 4970 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899220 4970 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899229 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899238 4970 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899247 4970 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899256 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899265 4970 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899274 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899284 4970 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899293 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899302 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 
30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899313 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899325 4970 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899337 4970 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899348 4970 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899357 4970 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899367 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899375 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899383 4970 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899392 4970 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899400 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899409 4970 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899419 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899428 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath 
\"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899436 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899443 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899455 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899463 4970 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899472 4970 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899481 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899489 4970 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899498 4970 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899506 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899513 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899521 4970 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899531 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899540 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc 
kubenswrapper[4970]: I0930 09:46:48.899549 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899558 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899566 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899574 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899582 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899590 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899598 4970 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899606 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899615 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899625 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899633 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899643 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899653 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc 
kubenswrapper[4970]: I0930 09:46:48.899663 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899671 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899682 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899694 4970 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899709 4970 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899721 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899734 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899746 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899759 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899771 4970 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899782 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899794 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899805 4970 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 
09:46:48.899816 4970 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899842 4970 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899854 4970 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899865 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899877 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899891 4970 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899902 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899913 4970 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899924 4970 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899933 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899942 4970 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899952 4970 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899963 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.899975 4970 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900008 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900017 4970 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900025 4970 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900034 4970 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900042 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900052 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900060 4970 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900068 4970 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900077 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900086 4970 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900094 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900102 4970 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900128 4970 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900140 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900151 4970 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900160 4970 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900169 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900178 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900186 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900195 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900248 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.900447 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.910470 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.923851 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.936277 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.943191 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.947331 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.952083 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Sep 30 09:46:48 crc kubenswrapper[4970]: W0930 09:46:48.955933 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-94963b1bbbbc8bd90a0fc2f125fc61c7899b09aaf1ecae482cd27171dac674f6 WatchSource:0}: Error finding container 94963b1bbbbc8bd90a0fc2f125fc61c7899b09aaf1ecae482cd27171dac674f6: Status 404 returned error can't find the container with id 94963b1bbbbc8bd90a0fc2f125fc61c7899b09aaf1ecae482cd27171dac674f6 Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.958481 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.963505 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.963736 4970 scope.go:117] "RemoveContainer" containerID="6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1" Sep 30 09:46:48 crc kubenswrapper[4970]: E0930 09:46:48.963920 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.973189 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
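
The pod_workers.go:1301 entry above is a different failure mode from the webhook noise: kube-apiserver-check-endpoints actually exited and is now in CrashLoopBackOff, so the kubelet refuses to start it again until the back-off window elapses. The window starts at 10s, as logged here, and under the kubelet's default policy doubles after each subsequent crash up to a 5m cap. A sketch of that schedule, assuming those default parameters:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay := 10 * time.Second       // initial back-off, as logged above
        const ceiling = 5 * time.Minute // default kubelet cap
        for crash := 1; crash <= 7; crash++ {
            fmt.Printf("after crash %d: wait %v before restarting\n", crash, delay)
            delay *= 2
            if delay > ceiling {
                delay = ceiling
            }
        }
    }

A successful run that stays up long enough resets the sequence, which is why a single 10s back-off like this one is usually harmless during node start-up.
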
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:48 crc kubenswrapper[4970]: I0930 09:46:48.989374 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Sep 30 09:46:48 crc kubenswrapper[4970]: W0930 09:46:48.997381 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-9c9e9dfdbb344b093298542ac921423c80cea220c70387ad5a04fe22cd22eae8 WatchSource:0}: Error finding container 9c9e9dfdbb344b093298542ac921423c80cea220c70387ad5a04fe22cd22eae8: Status 404 returned error can't find the container with id 9c9e9dfdbb344b093298542ac921423c80cea220c70387ad5a04fe22cd22eae8 Sep 30 09:46:49 crc kubenswrapper[4970]: W0930 09:46:49.001148 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-e8c9faf108814975beaf0300a60c95b7d9ae5f5064cfb7776406b794b8faa254 WatchSource:0}: Error finding container e8c9faf108814975beaf0300a60c95b7d9ae5f5064cfb7776406b794b8faa254: Status 404 returned error can't find the container with id e8c9faf108814975beaf0300a60c95b7d9ae5f5064cfb7776406b794b8faa254 Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.198432 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-zd52c"] Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.198743 4970 util.go:30] "No sandbox for pod can be found. 
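
The util.go:30 messages ("No sandbox for pod can be found. Need to start a new one") and the manager.go:1169 watch warnings are two sides of the same restart race: the old pod sandboxes did not survive the reboot, so the kubelet creates fresh ones, while cAdvisor briefly processes watch events for cgroup slices whose CRI-O containers are already gone, producing the "Status 404 ... can't find the container" warnings. The container ID is recoverable from the cgroup path itself; an illustrative extraction using a path taken verbatim from the warnings above:

    package main

    import (
        "fmt"
        "regexp"
    )

    // CRI-O cgroup leaf names embed the container ID as crio-<64 hex chars>.
    var idRE = regexp.MustCompile(`crio-([0-9a-f]{64})`)

    func main() {
        path := "/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-94963b1bbbbc8bd90a0fc2f125fc61c7899b09aaf1ecae482cd27171dac674f6"
        if m := idRE.FindStringSubmatch(path); m != nil {
            fmt.Println("container ID:", m[1]) // usable with a runtime inspection tool such as crictl
        }
    }
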
Need to start a new one" pod="openshift-dns/node-resolver-zd52c" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.200840 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.201454 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.201784 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.211345 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.228500 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.240443 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.251470 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.263699 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.276148 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.297301 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f4b3468b7a72c61779b1ed803e67a50f268f5ca127996fd8677961b84d13739\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:41Z\\\",\\\"message\\\":\\\"W0930 09:46:30.863403 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0930 
09:46:30.863812 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759225590 cert, and key in /tmp/serving-cert-1936145661/serving-signer.crt, /tmp/serving-cert-1936145661/serving-signer.key\\\\nI0930 09:46:31.180533 1 observer_polling.go:159] Starting file observer\\\\nW0930 09:46:31.183275 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0930 09:46:31.183507 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:31.186435 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936145661/tls.crt::/tmp/serving-cert-1936145661/tls.key\\\\\\\"\\\\nF0930 09:46:41.714585 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 
09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.302202 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.302256 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nm5x\" (UniqueName: \"kubernetes.io/projected/6f1f7645-8157-4743-a7cc-0083a3269987-kube-api-access-6nm5x\") pod \"node-resolver-zd52c\" (UID: \"6f1f7645-8157-4743-a7cc-0083a3269987\") " pod="openshift-dns/node-resolver-zd52c" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.302325 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6f1f7645-8157-4743-a7cc-0083a3269987-hosts-file\") pod \"node-resolver-zd52c\" (UID: \"6f1f7645-8157-4743-a7cc-0083a3269987\") " pod="openshift-dns/node-resolver-zd52c" Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.302509 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:46:50.302469513 +0000 UTC m=+23.374320447 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.306142 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.402916 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.402963 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.403009 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.403032 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6f1f7645-8157-4743-a7cc-0083a3269987-hosts-file\") pod \"node-resolver-zd52c\" (UID: \"6f1f7645-8157-4743-a7cc-0083a3269987\") " pod="openshift-dns/node-resolver-zd52c" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.403062 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nm5x\" (UniqueName: \"kubernetes.io/projected/6f1f7645-8157-4743-a7cc-0083a3269987-kube-api-access-6nm5x\") pod \"node-resolver-zd52c\" (UID: \"6f1f7645-8157-4743-a7cc-0083a3269987\") " pod="openshift-dns/node-resolver-zd52c" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.403084 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.403162 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.403218 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:50.403202403 +0000 UTC m=+23.475053347 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.403233 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/6f1f7645-8157-4743-a7cc-0083a3269987-hosts-file\") pod \"node-resolver-zd52c\" (UID: \"6f1f7645-8157-4743-a7cc-0083a3269987\") " pod="openshift-dns/node-resolver-zd52c" Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.403259 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.403347 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.403371 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.403465 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:50.403428949 +0000 UTC m=+23.475279923 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.403265 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.403530 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.403562 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.403611 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2025-09-30 09:46:50.403596463 +0000 UTC m=+23.475447437 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.403727 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.403789 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:50.403771738 +0000 UTC m=+23.475622712 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.421565 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nm5x\" (UniqueName: \"kubernetes.io/projected/6f1f7645-8157-4743-a7cc-0083a3269987-kube-api-access-6nm5x\") pod \"node-resolver-zd52c\" (UID: \"6f1f7645-8157-4743-a7cc-0083a3269987\") " pod="openshift-dns/node-resolver-zd52c" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.511882 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zd52c" Sep 30 09:46:49 crc kubenswrapper[4970]: W0930 09:46:49.530419 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f1f7645_8157_4743_a7cc_0083a3269987.slice/crio-3b39213c350cd7403366ab0941c450898b41e3ba1c265446fc0b60436552bc4b WatchSource:0}: Error finding container 3b39213c350cd7403366ab0941c450898b41e3ba1c265446fc0b60436552bc4b: Status 404 returned error can't find the container with id 3b39213c350cd7403366ab0941c450898b41e3ba1c265446fc0b60436552bc4b Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.576041 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-d6567"] Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.577467 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.581527 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-gcphg"] Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.582053 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-wdlzl"] Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.582321 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-frblw"] Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.582527 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.582619 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.583061 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.591925 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.593010 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.593720 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.594174 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.594256 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.594482 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.594491 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.594489 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.595259 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.595275 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.600297 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.600708 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.600830 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Sep 30 09:46:49 
crc kubenswrapper[4970]: I0930 09:46:49.600942 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.601226 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.601303 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.601322 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.601521 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.601575 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.607026 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f4b3468b7a72c61779b1ed803e67a50f268f5ca127996fd8677961b84d13739\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:41Z\\\",\\\"message\\\":\\\"W0930 09:46:30.863403 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0930 
09:46:30.863812 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759225590 cert, and key in /tmp/serving-cert-1936145661/serving-signer.crt, /tmp/serving-cert-1936145661/serving-signer.key\\\\nI0930 09:46:31.180533 1 observer_polling.go:159] Starting file observer\\\\nW0930 09:46:31.183275 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0930 09:46:31.183507 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:31.186435 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936145661/tls.crt::/tmp/serving-cert-1936145661/tls.key\\\\\\\"\\\\nF0930 09:46:41.714585 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 
09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.617199 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.631959 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.647850 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.658316 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.670772 4970 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\
\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.675823 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.676708 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.678278 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.679193 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.680596 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.681403 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.682279 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.683667 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.684065 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.684594 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.686312 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.687150 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.688745 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.689558 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.690390 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.691865 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.692622 4970 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.694084 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.694671 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.695509 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.696555 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.697063 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.697862 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.699199 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.699770 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.701186 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.702052 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.703145 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.704683 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705337 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/69776d3e-4ddb-484b-86dd-930de13b3523-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705367 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Sep 30 09:46:49 crc 
kubenswrapper[4970]: I0930 09:46:49.705382 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-etc-kubernetes\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705411 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/92198682-93fe-4b8a-8b03-bb768b56a129-proxy-tls\") pod \"machine-config-daemon-gcphg\" (UID: \"92198682-93fe-4b8a-8b03-bb768b56a129\") " pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705435 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705461 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92198682-93fe-4b8a-8b03-bb768b56a129-mcd-auth-proxy-config\") pod \"machine-config-daemon-gcphg\" (UID: \"92198682-93fe-4b8a-8b03-bb768b56a129\") " pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705484 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-openvswitch\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705503 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-var-lib-cni-bin\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705549 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/adc4e528-ad76-4673-925a-f4f932e1ac51-cni-binary-copy\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705569 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-systemd\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705587 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-cni-netd\") pod 
\"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705622 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-ovnkube-config\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705642 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qpww\" (UniqueName: \"kubernetes.io/projected/9687ea64-3693-468d-9fde-2059deb10338-kube-api-access-2qpww\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705660 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-run-k8s-cni-cncf-io\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705680 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bft59\" (UniqueName: \"kubernetes.io/projected/92198682-93fe-4b8a-8b03-bb768b56a129-kube-api-access-bft59\") pod \"machine-config-daemon-gcphg\" (UID: \"92198682-93fe-4b8a-8b03-bb768b56a129\") " pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705701 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9687ea64-3693-468d-9fde-2059deb10338-ovn-node-metrics-cert\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705721 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/69776d3e-4ddb-484b-86dd-930de13b3523-os-release\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705741 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-os-release\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705776 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-var-lib-cni-multus\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705799 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-node-log\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705817 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-cni-bin\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705836 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vjk6\" (UniqueName: \"kubernetes.io/projected/69776d3e-4ddb-484b-86dd-930de13b3523-kube-api-access-7vjk6\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705856 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-run-netns\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705888 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-multus-cni-dir\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705907 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/adc4e528-ad76-4673-925a-f4f932e1ac51-multus-daemon-config\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705926 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-run-multus-certs\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705950 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-slash\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.705975 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-cnibin\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706043 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-var-lib-kubelet\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706097 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-run-ovn-kubernetes\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706118 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-multus-socket-dir-parent\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706136 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtjh5\" (UniqueName: \"kubernetes.io/projected/adc4e528-ad76-4673-925a-f4f932e1ac51-kube-api-access-dtjh5\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706168 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/92198682-93fe-4b8a-8b03-bb768b56a129-rootfs\") pod \"machine-config-daemon-gcphg\" (UID: \"92198682-93fe-4b8a-8b03-bb768b56a129\") " pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706187 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-system-cni-dir\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706208 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-kubelet\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706290 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-ovnkube-script-lib\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706336 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-var-lib-openvswitch\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706356 4970 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-hostroot\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706385 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-systemd-units\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706407 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-log-socket\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706446 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-run-netns\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706473 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69776d3e-4ddb-484b-86dd-930de13b3523-system-cni-dir\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706496 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/69776d3e-4ddb-484b-86dd-930de13b3523-cni-binary-copy\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706520 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/69776d3e-4ddb-484b-86dd-930de13b3523-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706540 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-multus-conf-dir\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706611 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-etc-openvswitch\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706663 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-ovn\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706694 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-env-overrides\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706715 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/69776d3e-4ddb-484b-86dd-930de13b3523-cnibin\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.706732 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.707465 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.707745 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.708655 4970 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.708795 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.711159 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.712350 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.713022 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.715019 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" 
path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.716155 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.716842 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.718311 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.719060 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.719229 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.720398 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.721448 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.722872 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.724479 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.725354 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.726193 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.727479 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.728639 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.729846 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.730090 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.730589 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.731249 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.732478 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.733275 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.734607 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.740551 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.752028 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.768623 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.785109 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809183 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-run-k8s-cni-cncf-io\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809272 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-systemd\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809292 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-run-k8s-cni-cncf-io\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809307 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-cni-netd\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809363 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-ovnkube-config\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809372 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-systemd\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809415 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qpww\" (UniqueName: \"kubernetes.io/projected/9687ea64-3693-468d-9fde-2059deb10338-kube-api-access-2qpww\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809500 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bft59\" (UniqueName: \"kubernetes.io/projected/92198682-93fe-4b8a-8b03-bb768b56a129-kube-api-access-bft59\") pod \"machine-config-daemon-gcphg\" (UID: \"92198682-93fe-4b8a-8b03-bb768b56a129\") " pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809535 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9687ea64-3693-468d-9fde-2059deb10338-ovn-node-metrics-cert\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809585 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/69776d3e-4ddb-484b-86dd-930de13b3523-os-release\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809621 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-os-release\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809763 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-run-netns\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809820 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-var-lib-cni-multus\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc 
kubenswrapper[4970]: I0930 09:46:49.809566 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-cni-netd\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809854 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-node-log\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809915 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-node-log\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.809962 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-cni-bin\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810019 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vjk6\" (UniqueName: \"kubernetes.io/projected/69776d3e-4ddb-484b-86dd-930de13b3523-kube-api-access-7vjk6\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810043 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-run-multus-certs\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810082 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-multus-cni-dir\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810098 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/adc4e528-ad76-4673-925a-f4f932e1ac51-multus-daemon-config\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810116 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-var-lib-kubelet\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810135 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" 
(UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-slash\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810159 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-cnibin\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810175 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-run-ovn-kubernetes\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810192 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-multus-socket-dir-parent\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810208 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtjh5\" (UniqueName: \"kubernetes.io/projected/adc4e528-ad76-4673-925a-f4f932e1ac51-kube-api-access-dtjh5\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810211 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-run-netns\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810262 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/92198682-93fe-4b8a-8b03-bb768b56a129-rootfs\") pod \"machine-config-daemon-gcphg\" (UID: \"92198682-93fe-4b8a-8b03-bb768b56a129\") " pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810235 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/92198682-93fe-4b8a-8b03-bb768b56a129-rootfs\") pod \"machine-config-daemon-gcphg\" (UID: \"92198682-93fe-4b8a-8b03-bb768b56a129\") " pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810298 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-os-release\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810321 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-system-cni-dir\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " 
pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810344 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-cni-bin\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810351 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-ovnkube-config\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810362 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-kubelet\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810372 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-var-lib-cni-multus\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810398 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-kubelet\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810402 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-ovnkube-script-lib\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810441 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-slash\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810457 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-var-lib-openvswitch\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810492 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-hostroot\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810522 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-cnibin\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810542 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-systemd-units\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810554 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-run-ovn-kubernetes\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810574 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-log-socket\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810605 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-run-netns\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810654 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69776d3e-4ddb-484b-86dd-930de13b3523-system-cni-dir\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810681 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/69776d3e-4ddb-484b-86dd-930de13b3523-cni-binary-copy\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810707 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/69776d3e-4ddb-484b-86dd-930de13b3523-cnibin\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810726 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-multus-socket-dir-parent\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810739 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/69776d3e-4ddb-484b-86dd-930de13b3523-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810767 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-multus-conf-dir\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810813 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-etc-openvswitch\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810839 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-ovn\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810863 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-env-overrides\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810870 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-multus-cni-dir\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810897 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-var-lib-kubelet\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810327 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/69776d3e-4ddb-484b-86dd-930de13b3523-os-release\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810895 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810933 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810947 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/69776d3e-4ddb-484b-86dd-930de13b3523-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810490 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-run-multus-certs\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811008 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/69776d3e-4ddb-484b-86dd-930de13b3523-cnibin\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811231 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-multus-conf-dir\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811279 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-hostroot\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811310 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-var-lib-openvswitch\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811340 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-log-socket\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811413 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-systemd-units\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.810865 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-ovnkube-script-lib\") pod \"ovnkube-node-frblw\" (UID: 
\"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811498 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-system-cni-dir\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811518 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-run-netns\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811564 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-etc-kubernetes\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811574 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/69776d3e-4ddb-484b-86dd-930de13b3523-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811573 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-ovn\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811605 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-etc-kubernetes\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811651 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-etc-openvswitch\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811717 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/69776d3e-4ddb-484b-86dd-930de13b3523-system-cni-dir\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811765 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/92198682-93fe-4b8a-8b03-bb768b56a129-proxy-tls\") pod \"machine-config-daemon-gcphg\" (UID: \"92198682-93fe-4b8a-8b03-bb768b56a129\") " pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 
09:46:49.811790 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92198682-93fe-4b8a-8b03-bb768b56a129-mcd-auth-proxy-config\") pod \"machine-config-daemon-gcphg\" (UID: \"92198682-93fe-4b8a-8b03-bb768b56a129\") " pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811815 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-openvswitch\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811836 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-var-lib-cni-bin\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811861 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/adc4e528-ad76-4673-925a-f4f932e1ac51-host-var-lib-cni-bin\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.811900 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/adc4e528-ad76-4673-925a-f4f932e1ac51-cni-binary-copy\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.812097 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-env-overrides\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.812101 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-openvswitch\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.812501 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/69776d3e-4ddb-484b-86dd-930de13b3523-cni-binary-copy\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.812558 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/92198682-93fe-4b8a-8b03-bb768b56a129-mcd-auth-proxy-config\") pod \"machine-config-daemon-gcphg\" (UID: \"92198682-93fe-4b8a-8b03-bb768b56a129\") " pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.812666 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/adc4e528-ad76-4673-925a-f4f932e1ac51-cni-binary-copy\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.813337 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/adc4e528-ad76-4673-925a-f4f932e1ac51-multus-daemon-config\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.814155 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9687ea64-3693-468d-9fde-2059deb10338-ovn-node-metrics-cert\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.816095 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/92198682-93fe-4b8a-8b03-bb768b56a129-proxy-tls\") pod \"machine-config-daemon-gcphg\" (UID: \"92198682-93fe-4b8a-8b03-bb768b56a129\") " pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.833101 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/69776d3e-4ddb-484b-86dd-930de13b3523-tuning-conf-dir\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.863468 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.869154 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtjh5\" (UniqueName: \"kubernetes.io/projected/adc4e528-ad76-4673-925a-f4f932e1ac51-kube-api-access-dtjh5\") pod \"multus-wdlzl\" (UID: \"adc4e528-ad76-4673-925a-f4f932e1ac51\") " pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.873716 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bft59\" (UniqueName: \"kubernetes.io/projected/92198682-93fe-4b8a-8b03-bb768b56a129-kube-api-access-bft59\") pod \"machine-config-daemon-gcphg\" (UID: \"92198682-93fe-4b8a-8b03-bb768b56a129\") " pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.879639 4970 scope.go:117] "RemoveContainer" containerID="6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1" Sep 30 09:46:49 crc kubenswrapper[4970]: E0930 09:46:49.879827 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.886725 4970 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vjk6\" (UniqueName: \"kubernetes.io/projected/69776d3e-4ddb-484b-86dd-930de13b3523-kube-api-access-7vjk6\") pod \"multus-additional-cni-plugins-d6567\" (UID: \"69776d3e-4ddb-484b-86dd-930de13b3523\") " pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.887712 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753
fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3f4b3468b7a72c61779b1ed803e67a50f268f5ca127996fd8677961b84d13739\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:41Z\\\",\\\"message\\\":\\\"W0930 09:46:30.863403 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0930 09:46:30.863812 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759225590 cert, and key in /tmp/serving-cert-1936145661/serving-signer.crt, /tmp/serving-cert-1936145661/serving-signer.key\\\\nI0930 09:46:31.180533 1 observer_polling.go:159] Starting file observer\\\\nW0930 09:46:31.183275 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0930 09:46:31.183507 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:31.186435 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1936145661/tls.crt::/tmp/serving-cert-1936145661/tls.key\\\\\\\"\\\\nF0930 09:46:41.714585 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.894593 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qpww\" (UniqueName: \"kubernetes.io/projected/9687ea64-3693-468d-9fde-2059deb10338-kube-api-access-2qpww\") pod \"ovnkube-node-frblw\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.899593 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149"} Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.899757 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe"} Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.899772 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e8c9faf108814975beaf0300a60c95b7d9ae5f5064cfb7776406b794b8faa254"} Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.903538 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50"} Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.903614 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9c9e9dfdbb344b093298542ac921423c80cea220c70387ad5a04fe22cd22eae8"} Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.904512 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.906220 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-d6567" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.910205 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-wdlzl" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.919296 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.920967 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"94963b1bbbbc8bd90a0fc2f125fc61c7899b09aaf1ecae482cd27171dac674f6"} Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.934328 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.934842 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.941407 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zd52c" event={"ID":"6f1f7645-8157-4743-a7cc-0083a3269987","Type":"ContainerStarted","Data":"3b39213c350cd7403366ab0941c450898b41e3ba1c265446fc0b60436552bc4b"} Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.950874 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.963650 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:49 crc kubenswrapper[4970]: I0930 09:46:49.991702 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\
\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.000239 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.019455 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688d
f312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.030844 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.047342 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.057932 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qu
ay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.070660 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.084684 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.096453 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.104710 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.116879 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.128108 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.143184 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.324223 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.324422 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:46:52.324394165 +0000 UTC m=+25.396245099 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.425078 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.425132 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.425159 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.425185 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.425293 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.425344 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.425365 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.425378 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.425383 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:52.425357101 +0000 UTC m=+25.497208035 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.425406 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.425450 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:52.425430413 +0000 UTC m=+25.497281347 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.425462 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.425408 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.425484 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.425503 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:52.425496564 +0000 UTC m=+25.497347498 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.425556 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:52.425534075 +0000 UTC m=+25.497385209 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.668251 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.668314 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.668729 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.668855 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.668383 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:46:50 crc kubenswrapper[4970]: E0930 09:46:50.669171 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.947494 4970 generic.go:334] "Generic (PLEG): container finished" podID="69776d3e-4ddb-484b-86dd-930de13b3523" containerID="816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95" exitCode=0 Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.947569 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" event={"ID":"69776d3e-4ddb-484b-86dd-930de13b3523","Type":"ContainerDied","Data":"816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95"} Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.947606 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" event={"ID":"69776d3e-4ddb-484b-86dd-930de13b3523","Type":"ContainerStarted","Data":"bfe0f716c08d7e85f3c88d50173bb72d42a608d3614ca9ac7d170fdceb294b9e"} Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.950160 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" containerID="8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c" exitCode=0 Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.950284 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c"} Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.950368 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"b0647c809e6f211d5390100dba68c022765b168080a9a385c63462c8d822693b"} Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.954866 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zd52c" event={"ID":"6f1f7645-8157-4743-a7cc-0083a3269987","Type":"ContainerStarted","Data":"ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4"} Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.957138 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205"} Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.957213 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba"} Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.957229 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"f1c909f4876634440b66a91edaa52274fb02c4a0b57c51951540b63cb3d16775"} Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.958587 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wdlzl" event={"ID":"adc4e528-ad76-4673-925a-f4f932e1ac51","Type":"ContainerStarted","Data":"6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430"} Sep 30 
09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.958618 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wdlzl" event={"ID":"adc4e528-ad76-4673-925a-f4f932e1ac51","Type":"ContainerStarted","Data":"d65ff30fbb1e1d85ca50134fe44192f8c360d9fc596f8c18579d56687138b2a1"} Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.965069 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:50Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.984750 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:50Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:50 crc kubenswrapper[4970]: I0930 09:46:50.998968 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:50Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.019384 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.034489 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.049433 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.065757 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.082932 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\
\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"po
dIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.099345 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.119532 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.135773 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.151504 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.165232 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.182630 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.197262 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.214115 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPa
th\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.229936 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.249466 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.267598 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.288868 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.305403 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.323940 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.339542 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.353037 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.560879 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.561880 4970 scope.go:117] "RemoveContainer" containerID="6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1" Sep 30 09:46:51 crc kubenswrapper[4970]: E0930 09:46:51.562219 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.970872 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954"} Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.970920 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306"} Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.970935 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff"} Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.970946 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d"} Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.970956 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29"} Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.970965 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040"} Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.972769 4970 generic.go:334] "Generic (PLEG): container finished" podID="69776d3e-4ddb-484b-86dd-930de13b3523" containerID="30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6" exitCode=0 Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.972825 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" event={"ID":"69776d3e-4ddb-484b-86dd-930de13b3523","Type":"ContainerDied","Data":"30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6"} Sep 30 09:46:51 crc kubenswrapper[4970]: I0930 09:46:51.989125 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:51Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.003153 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.024305 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.042694 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-
30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.061423 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.080564 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z 
is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.093371 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.106874 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.122341 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.136223 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.148852 4970 
status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.161571 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.178728 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-4t4nh"] Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.179046 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-4t4nh" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.180386 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.180576 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.180689 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.181148 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.201003 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 
09:46:52.212080 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.228547 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.244437 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.258847 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.273774 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.289951 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\
\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.309459 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z 
is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.326710 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.339648 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.344671 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.344855 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpkn7\" (UniqueName: \"kubernetes.io/projected/2eedcbd8-2867-4c1d-8d23-25e76843cca8-kube-api-access-qpkn7\") pod \"node-ca-4t4nh\" (UID: \"2eedcbd8-2867-4c1d-8d23-25e76843cca8\") " pod="openshift-image-registry/node-ca-4t4nh" Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.344904 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:46:56.344876536 +0000 UTC m=+29.416727470 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.345053 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2eedcbd8-2867-4c1d-8d23-25e76843cca8-host\") pod \"node-ca-4t4nh\" (UID: \"2eedcbd8-2867-4c1d-8d23-25e76843cca8\") " pod="openshift-image-registry/node-ca-4t4nh" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.345113 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2eedcbd8-2867-4c1d-8d23-25e76843cca8-serviceca\") pod \"node-ca-4t4nh\" (UID: \"2eedcbd8-2867-4c1d-8d23-25e76843cca8\") " pod="openshift-image-registry/node-ca-4t4nh" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.350499 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\
":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.360302 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.371965 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.445724 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.445850 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.446163 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:56.446129599 +0000 UTC m=+29.517980573 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.446049 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.446303 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.446322 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2eedcbd8-2867-4c1d-8d23-25e76843cca8-host\") pod \"node-ca-4t4nh\" (UID: \"2eedcbd8-2867-4c1d-8d23-25e76843cca8\") " pod="openshift-image-registry/node-ca-4t4nh" Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.446330 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.446341 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2eedcbd8-2867-4c1d-8d23-25e76843cca8-serviceca\") pod \"node-ca-4t4nh\" (UID: \"2eedcbd8-2867-4c1d-8d23-25e76843cca8\") " pod="openshift-image-registry/node-ca-4t4nh" Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.446344 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.446365 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.446415 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:56.446380686 +0000 UTC m=+29.518231610 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.446445 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpkn7\" (UniqueName: \"kubernetes.io/projected/2eedcbd8-2867-4c1d-8d23-25e76843cca8-kube-api-access-qpkn7\") pod \"node-ca-4t4nh\" (UID: \"2eedcbd8-2867-4c1d-8d23-25e76843cca8\") " pod="openshift-image-registry/node-ca-4t4nh" Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.446461 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.446477 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.446534 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:56.44651752 +0000 UTC m=+29.518368454 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.446541 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.446564 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.446575 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.446599 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 09:46:56.446592442 +0000 UTC m=+29.518443376 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.446631 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2eedcbd8-2867-4c1d-8d23-25e76843cca8-host\") pod \"node-ca-4t4nh\" (UID: \"2eedcbd8-2867-4c1d-8d23-25e76843cca8\") " pod="openshift-image-registry/node-ca-4t4nh" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.448424 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2eedcbd8-2867-4c1d-8d23-25e76843cca8-serviceca\") pod \"node-ca-4t4nh\" (UID: \"2eedcbd8-2867-4c1d-8d23-25e76843cca8\") " pod="openshift-image-registry/node-ca-4t4nh" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.465692 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpkn7\" (UniqueName: \"kubernetes.io/projected/2eedcbd8-2867-4c1d-8d23-25e76843cca8-kube-api-access-qpkn7\") pod \"node-ca-4t4nh\" (UID: \"2eedcbd8-2867-4c1d-8d23-25e76843cca8\") " pod="openshift-image-registry/node-ca-4t4nh" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.667627 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.667685 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.667653 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.667768 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.667868 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:46:52 crc kubenswrapper[4970]: E0930 09:46:52.667945 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.743720 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-4t4nh" Sep 30 09:46:52 crc kubenswrapper[4970]: W0930 09:46:52.764071 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eedcbd8_2867_4c1d_8d23_25e76843cca8.slice/crio-e693873a14a140581d6152c4f82f7d61def26991a4260bcc67eaba9eedf1e895 WatchSource:0}: Error finding container e693873a14a140581d6152c4f82f7d61def26991a4260bcc67eaba9eedf1e895: Status 404 returned error can't find the container with id e693873a14a140581d6152c4f82f7d61def26991a4260bcc67eaba9eedf1e895 Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.979709 4970 generic.go:334] "Generic (PLEG): container finished" podID="69776d3e-4ddb-484b-86dd-930de13b3523" containerID="4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50" exitCode=0 Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.979752 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" event={"ID":"69776d3e-4ddb-484b-86dd-930de13b3523","Type":"ContainerDied","Data":"4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50"} Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.981959 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2"} Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.983635 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4t4nh" event={"ID":"2eedcbd8-2867-4c1d-8d23-25e76843cca8","Type":"ContainerStarted","Data":"e693873a14a140581d6152c4f82f7d61def26991a4260bcc67eaba9eedf1e895"} Sep 30 09:46:52 crc kubenswrapper[4970]: I0930 09:46:52.995405 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:52Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.007628 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.020765 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.036472 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.055051 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.069511 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.091787 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.106552 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"
/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.126221 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z 
is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.144207 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.158258 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.170025 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.183461 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.202097 4970 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.215743 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.233684 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.249059 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.261473 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.274069 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.287816 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.302714 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.316020 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.327138 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.338708 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.349184 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.359785 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:53Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.989890 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04"} Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.990744 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-4t4nh" event={"ID":"2eedcbd8-2867-4c1d-8d23-25e76843cca8","Type":"ContainerStarted","Data":"3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399"} Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.992496 4970 generic.go:334] "Generic (PLEG): container finished" podID="69776d3e-4ddb-484b-86dd-930de13b3523" containerID="caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1" exitCode=0 Sep 30 09:46:53 crc kubenswrapper[4970]: I0930 09:46:53.992561 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" event={"ID":"69776d3e-4ddb-484b-86dd-930de13b3523","Type":"ContainerDied","Data":"caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1"} Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.007462 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.019059 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.030442 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.044566 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"moun
tPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.063158 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z 
is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.081051 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.098698 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.108974 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.124709 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.139844 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.154183 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.164886 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.176641 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.186327 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.195353 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.208622 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.221265 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.234427 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.249176 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.266149 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.280380 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8
s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.282153 4970 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.285925 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.285976 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.286003 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.286362 4970 kubelet_node_status.go:76] "Attempting to register node" node="crc" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.295812 4970 kubelet_node_status.go:115] "Node was previously registered" node="crc" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.296066 4970 kubelet_node_status.go:79] "Successfully registered node" node="crc" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.297071 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.297110 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.297122 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.297138 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.297151 4970 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:54Z","lastTransitionTime":"2025-09-30T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.300564 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z 
is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.314545 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: E0930 09:46:54.316924 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.321303 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.321337 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.321351 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.321371 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.321384 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:54Z","lastTransitionTime":"2025-09-30T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.329103 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: E0930 09:46:54.334479 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeByt
es\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2
e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.337619 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.337647 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.337655 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.337668 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.337677 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:54Z","lastTransitionTime":"2025-09-30T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.339977 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: E0930 09:46:54.350159 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.352064 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.353906 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.353953 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.353965 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.353999 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.354013 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:54Z","lastTransitionTime":"2025-09-30T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:54 crc kubenswrapper[4970]: E0930 09:46:54.367238 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.371776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.371805 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.371814 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.371829 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.371842 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:54Z","lastTransitionTime":"2025-09-30T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:54 crc kubenswrapper[4970]: E0930 09:46:54.390577 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: E0930 09:46:54.390742 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.393061 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.393109 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.393121 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.393141 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.393156 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:54Z","lastTransitionTime":"2025-09-30T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.495426 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.495470 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.495479 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.495496 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.495509 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:54Z","lastTransitionTime":"2025-09-30T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.599096 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.599150 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.599164 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.599188 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.599201 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:54Z","lastTransitionTime":"2025-09-30T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.667822 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.667896 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:54 crc kubenswrapper[4970]: E0930 09:46:54.667969 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.667832 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:46:54 crc kubenswrapper[4970]: E0930 09:46:54.668136 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:46:54 crc kubenswrapper[4970]: E0930 09:46:54.668268 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.701703 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.701732 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.701740 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.701754 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.701766 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:54Z","lastTransitionTime":"2025-09-30T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.726913 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.731058 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.735412 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.741186 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.751298 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.795282 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\
\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\
"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.804743 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.804779 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.804790 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.804808 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.804819 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:54Z","lastTransitionTime":"2025-09-30T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.825542 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube
rnetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.854155 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z 
is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.868130 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.884700 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.901647 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.907197 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.907243 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.907256 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.907276 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.907290 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:54Z","lastTransitionTime":"2025-09-30T09:46:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.919321 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.939868 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.954041 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.967215 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.982308 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.995932 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:54Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:54 crc kubenswrapper[4970]: I0930 09:46:54.999870 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" event={"ID":"69776d3e-4ddb-484b-86dd-930de13b3523","Type":"ContainerStarted","Data":"1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b"} Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.008956 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.009004 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.009015 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.009033 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.009043 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:55Z","lastTransitionTime":"2025-09-30T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.011395 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.026848 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.042669 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.055146 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.070294 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.086042 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.101712 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8
s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.111676 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.111745 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.111756 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.111776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.111788 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:55Z","lastTransitionTime":"2025-09-30T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.126653 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e
7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.143295 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.163524 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.187521 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.204293 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.215477 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.215524 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.215534 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.215548 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.215558 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:55Z","lastTransitionTime":"2025-09-30T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.223545 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.238616 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.251562 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.267694 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.281469 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"na
me\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.300434 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z 
is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.317895 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.317937 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.317948 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.317965 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.317975 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:55Z","lastTransitionTime":"2025-09-30T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.318251 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.336284 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.349681 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.367821 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.385172 4970 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.401720 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.413410 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.420566 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.420612 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.420621 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.420636 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.420645 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:55Z","lastTransitionTime":"2025-09-30T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.440050 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.478335 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:55Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.523976 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.524075 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.524094 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.524119 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.524189 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:55Z","lastTransitionTime":"2025-09-30T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.626861 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.626909 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.626928 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.626954 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.626983 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:55Z","lastTransitionTime":"2025-09-30T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.730526 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.730588 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.730605 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.730630 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.730647 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:55Z","lastTransitionTime":"2025-09-30T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.834228 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.834300 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.834324 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.834355 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.834376 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:55Z","lastTransitionTime":"2025-09-30T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.938130 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.938203 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.938221 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.938245 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:55 crc kubenswrapper[4970]: I0930 09:46:55.938261 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:55Z","lastTransitionTime":"2025-09-30T09:46:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.014903 4970 generic.go:334] "Generic (PLEG): container finished" podID="69776d3e-4ddb-484b-86dd-930de13b3523" containerID="1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b" exitCode=0 Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.015038 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" event={"ID":"69776d3e-4ddb-484b-86dd-930de13b3523","Type":"ContainerDied","Data":"1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b"} Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.035123 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuber
netes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and 
discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.041540 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.041612 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.041635 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 
09:46:56.041669 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.041692 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:56Z","lastTransitionTime":"2025-09-30T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.055661 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.070291 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.089433 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.102960 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.126283 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\"
,\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.138860 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.144290 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.144343 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.144356 4970 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.144380 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.144391 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:56Z","lastTransitionTime":"2025-09-30T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.149857 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.163565 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.177267 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.194125 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.209264 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.227298 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.242379 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:56Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.246938 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.246973 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.247028 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.247055 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.247071 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:56Z","lastTransitionTime":"2025-09-30T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.349915 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.349956 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.350011 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.350030 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.350040 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:56Z","lastTransitionTime":"2025-09-30T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.395208 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.395417 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:47:04.395392413 +0000 UTC m=+37.467243347 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.452928 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.452984 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.453019 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.453040 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.453053 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:56Z","lastTransitionTime":"2025-09-30T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.496036 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.496104 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.496146 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.496191 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.496238 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.496317 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:47:04.496296717 +0000 UTC m=+37.568147651 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.496350 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.496364 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.496379 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.496385 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.496498 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:47:04.496467982 +0000 UTC m=+37.568318986 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.496394 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.496393 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.496647 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.496655 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 09:47:04.496600235 +0000 UTC m=+37.568451209 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.496686 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 09:47:04.496678907 +0000 UTC m=+37.568529832 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.556718 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.556756 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.556767 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.556783 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.556794 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:56Z","lastTransitionTime":"2025-09-30T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.659401 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.659461 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.659480 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.659504 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.659522 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:56Z","lastTransitionTime":"2025-09-30T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.667788 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.667832 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.667802 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.667963 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.668125 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:46:56 crc kubenswrapper[4970]: E0930 09:46:56.668230 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.763043 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.763110 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.763129 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.763155 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.763173 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:56Z","lastTransitionTime":"2025-09-30T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.866737 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.866786 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.866797 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.866813 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.866824 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:56Z","lastTransitionTime":"2025-09-30T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.970181 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.970230 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.970250 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.970277 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:56 crc kubenswrapper[4970]: I0930 09:46:56.970295 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:56Z","lastTransitionTime":"2025-09-30T09:46:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.025225 4970 generic.go:334] "Generic (PLEG): container finished" podID="69776d3e-4ddb-484b-86dd-930de13b3523" containerID="ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4" exitCode=0 Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.025308 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" event={"ID":"69776d3e-4ddb-484b-86dd-930de13b3523","Type":"ContainerDied","Data":"ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4"} Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.037817 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4"} Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.039666 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.039783 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.059872 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z 
is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.074073 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.074105 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.074116 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.074132 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.074145 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:57Z","lastTransitionTime":"2025-09-30T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.074522 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.075250 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.078985 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.093676 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.106804 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.121141 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.132221 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.147709 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.160729 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.174821 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.176705 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.176758 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.176772 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.176793 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.176811 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:57Z","lastTransitionTime":"2025-09-30T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.187168 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.201226 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.218974 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.230126 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.241294 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:
46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.254597 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.270564 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.279754 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.279794 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.279806 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.279823 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.279836 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:57Z","lastTransitionTime":"2025-09-30T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.285102 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.300604 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.313089 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.322860 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.333180 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.353125 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e2
7fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.367215 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.382727 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.382756 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.382766 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.382779 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.382790 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:57Z","lastTransitionTime":"2025-09-30T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.389402 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de7a0911657e286cacb168a9d5b6e0143618aad
1f132f64c3f5d4d432fc0dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.407958 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.421273 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.438098 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.451963 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.485643 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.485691 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.485705 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.485723 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.485736 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:57Z","lastTransitionTime":"2025-09-30T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.589403 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.589476 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.589497 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.589527 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.589545 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:57Z","lastTransitionTime":"2025-09-30T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.691041 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.692363 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.692414 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.692431 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.692458 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.692475 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:57Z","lastTransitionTime":"2025-09-30T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.711831 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.733276 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.755185 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.775007 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.798097 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.798179 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.798200 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.798228 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.798257 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:57Z","lastTransitionTime":"2025-09-30T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.804219 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.821446 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.839471 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.855721 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.872593 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.886381 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.904619 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.904701 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.904716 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.904740 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.904756 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:57Z","lastTransitionTime":"2025-09-30T09:46:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.912231 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.928036 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:57 crc kubenswrapper[4970]: I0930 09:46:57.949538 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c
5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.008270 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 
30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.008329 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.008340 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.008360 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.008371 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:58Z","lastTransitionTime":"2025-09-30T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.046828 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" event={"ID":"69776d3e-4ddb-484b-86dd-930de13b3523","Type":"ContainerStarted","Data":"7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58"} Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.046966 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.064188 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.078945 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.091796 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.107097 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.111324 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.111454 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.111529 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.111639 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.111712 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:58Z","lastTransitionTime":"2025-09-30T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.123777 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.142032 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.164317 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.184920 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.205042 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.214712 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.214778 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.214797 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.214823 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.214842 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:58Z","lastTransitionTime":"2025-09-30T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.222924 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.248424 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.266229 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.297069 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c
5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\
\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.318221 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 
30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.318276 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.318290 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.318309 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.318324 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:58Z","lastTransitionTime":"2025-09-30T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.320269 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:46:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.420772 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.420830 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.420846 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.420869 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.420887 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:58Z","lastTransitionTime":"2025-09-30T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.524050 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.524116 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.524133 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.524158 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.524174 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:58Z","lastTransitionTime":"2025-09-30T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.627196 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.627264 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.627286 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.627318 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.627345 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:58Z","lastTransitionTime":"2025-09-30T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.667628 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.667666 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.667737 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:46:58 crc kubenswrapper[4970]: E0930 09:46:58.667835 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:46:58 crc kubenswrapper[4970]: E0930 09:46:58.668051 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:46:58 crc kubenswrapper[4970]: E0930 09:46:58.668157 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.730266 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.730331 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.730345 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.730360 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.730370 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:58Z","lastTransitionTime":"2025-09-30T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.833339 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.833379 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.833391 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.833407 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.833418 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:58Z","lastTransitionTime":"2025-09-30T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.936065 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.936106 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.936118 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.936133 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:58 crc kubenswrapper[4970]: I0930 09:46:58.936146 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:58Z","lastTransitionTime":"2025-09-30T09:46:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.037832 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.037873 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.037885 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.037900 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.037910 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:59Z","lastTransitionTime":"2025-09-30T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.048980 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.141306 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.141381 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.141402 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.141434 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.141498 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:59Z","lastTransitionTime":"2025-09-30T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.244418 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.244490 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.244514 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.244542 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.244560 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:59Z","lastTransitionTime":"2025-09-30T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.347266 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.347326 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.347343 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.347367 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.347384 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:59Z","lastTransitionTime":"2025-09-30T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.452088 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.452150 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.452164 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.452182 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.452196 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:59Z","lastTransitionTime":"2025-09-30T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.555040 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.555082 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.555095 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.555114 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.555125 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:59Z","lastTransitionTime":"2025-09-30T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.657651 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.657689 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.657697 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.657711 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.657719 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:59Z","lastTransitionTime":"2025-09-30T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.759747 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.759801 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.759811 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.759832 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.759847 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:59Z","lastTransitionTime":"2025-09-30T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.862185 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.862459 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.862568 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.862685 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.862798 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:59Z","lastTransitionTime":"2025-09-30T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.965507 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.965550 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.965561 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.965577 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:46:59 crc kubenswrapper[4970]: I0930 09:46:59.965590 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:46:59Z","lastTransitionTime":"2025-09-30T09:46:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.067982 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.068083 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.068102 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.068125 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.068141 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:00Z","lastTransitionTime":"2025-09-30T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.170353 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.170403 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.170415 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.170433 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.170449 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:00Z","lastTransitionTime":"2025-09-30T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.272957 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.273037 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.273054 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.273104 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.273120 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:00Z","lastTransitionTime":"2025-09-30T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.374757 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.374790 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.374798 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.374814 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.374822 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:00Z","lastTransitionTime":"2025-09-30T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.478214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.478272 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.478284 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.478304 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.478318 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:00Z","lastTransitionTime":"2025-09-30T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.581465 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.581516 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.581534 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.581559 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.581573 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:00Z","lastTransitionTime":"2025-09-30T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.667878 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.667909 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.667948 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:00 crc kubenswrapper[4970]: E0930 09:47:00.668091 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:00 crc kubenswrapper[4970]: E0930 09:47:00.668242 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:00 crc kubenswrapper[4970]: E0930 09:47:00.668452 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.684072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.684123 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.684142 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.684165 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.684192 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:00Z","lastTransitionTime":"2025-09-30T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.786361 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.786415 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.786432 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.786455 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.786473 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:00Z","lastTransitionTime":"2025-09-30T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.889674 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.889746 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.889768 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.889796 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.889818 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:00Z","lastTransitionTime":"2025-09-30T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.992471 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.992519 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.992537 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.992560 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:00 crc kubenswrapper[4970]: I0930 09:47:00.992580 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:00Z","lastTransitionTime":"2025-09-30T09:47:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.059236 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/0.log" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.063315 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" containerID="4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4" exitCode=1 Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.063393 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4"} Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.066461 4970 scope.go:117] "RemoveContainer" containerID="4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.087871 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.096171 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.096241 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.096262 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.096290 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.096312 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:01Z","lastTransitionTime":"2025-09-30T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.107973 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.125570 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.142449 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.167366 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.187034 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.199231 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.199274 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.199285 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.199305 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.199317 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:01Z","lastTransitionTime":"2025-09-30T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.214564 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de7a0911657e286cacb168a9d5b6e0143618aad
1f132f64c3f5d4d432fc0dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:00Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 09:47:00.334524 6227 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0930 09:47:00.334217 6227 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0930 09:47:00.334681 6227 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0930 09:47:00.334706 6227 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0930 09:47:00.334790 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0930 09:47:00.334828 6227 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0930 09:47:00.334872 6227 handler.go:208] Removed *v1.Node event handler 7\\\\nI0930 09:47:00.334895 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0930 09:47:00.335031 6227 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0930 09:47:00.335188 6227 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 09:47:00.335352 6227 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0930 09:47:00.335424 6227 factory.go:656] Stopping watch factory\\\\nI0930 09:47:00.335444 6227 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.234876 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.250749 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.271572 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.290488 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.301562 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.301596 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.301607 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.301623 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.301632 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:01Z","lastTransitionTime":"2025-09-30T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.305334 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.322800 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.339592 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.404748 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.404808 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.404825 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.404849 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.404866 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:01Z","lastTransitionTime":"2025-09-30T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.508404 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.508461 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.508479 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.508502 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.508519 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:01Z","lastTransitionTime":"2025-09-30T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.645488 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.645530 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.645541 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.645556 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.645570 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:01Z","lastTransitionTime":"2025-09-30T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.666384 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb"] Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.666970 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.670179 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.670327 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.688960 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controlle
r-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.703625 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.722748 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.743813 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.747453 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.747487 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.747502 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.747522 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.747539 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:01Z","lastTransitionTime":"2025-09-30T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.760516 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eba65cf9-c598-46fa-a09b-23cb90b9575f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6sjvb\" (UID: \"eba65cf9-c598-46fa-a09b-23cb90b9575f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.760608 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eba65cf9-c598-46fa-a09b-23cb90b9575f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6sjvb\" (UID: \"eba65cf9-c598-46fa-a09b-23cb90b9575f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.760783 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l68gs\" (UniqueName: \"kubernetes.io/projected/eba65cf9-c598-46fa-a09b-23cb90b9575f-kube-api-access-l68gs\") pod \"ovnkube-control-plane-749d76644c-6sjvb\" (UID: \"eba65cf9-c598-46fa-a09b-23cb90b9575f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.760975 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eba65cf9-c598-46fa-a09b-23cb90b9575f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6sjvb\" (UID: \"eba65cf9-c598-46fa-a09b-23cb90b9575f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.763737 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.776385 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.800435 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.817423 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.846060 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c
5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:00Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 09:47:00.334524 6227 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0930 09:47:00.334217 6227 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0930 09:47:00.334681 6227 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0930 09:47:00.334706 6227 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0930 09:47:00.334790 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0930 09:47:00.334828 6227 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0930 09:47:00.334872 6227 handler.go:208] Removed *v1.Node event handler 7\\\\nI0930 09:47:00.334895 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0930 09:47:00.335031 6227 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0930 09:47:00.335188 6227 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 09:47:00.335352 6227 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0930 09:47:00.335424 6227 factory.go:656] Stopping watch factory\\\\nI0930 09:47:00.335444 6227 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.849575 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.849604 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.849615 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.849631 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.849642 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:01Z","lastTransitionTime":"2025-09-30T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.861411 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.861730 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eba65cf9-c598-46fa-a09b-23cb90b9575f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6sjvb\" (UID: \"eba65cf9-c598-46fa-a09b-23cb90b9575f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.861804 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eba65cf9-c598-46fa-a09b-23cb90b9575f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6sjvb\" (UID: \"eba65cf9-c598-46fa-a09b-23cb90b9575f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.861878 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eba65cf9-c598-46fa-a09b-23cb90b9575f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6sjvb\" (UID: \"eba65cf9-c598-46fa-a09b-23cb90b9575f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.861927 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l68gs\" (UniqueName: \"kubernetes.io/projected/eba65cf9-c598-46fa-a09b-23cb90b9575f-kube-api-access-l68gs\") pod \"ovnkube-control-plane-749d76644c-6sjvb\" (UID: \"eba65cf9-c598-46fa-a09b-23cb90b9575f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.862644 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/eba65cf9-c598-46fa-a09b-23cb90b9575f-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-6sjvb\" (UID: \"eba65cf9-c598-46fa-a09b-23cb90b9575f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.863130 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/eba65cf9-c598-46fa-a09b-23cb90b9575f-env-overrides\") pod \"ovnkube-control-plane-749d76644c-6sjvb\" (UID: \"eba65cf9-c598-46fa-a09b-23cb90b9575f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.869338 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/eba65cf9-c598-46fa-a09b-23cb90b9575f-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-6sjvb\" (UID: \"eba65cf9-c598-46fa-a09b-23cb90b9575f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.883315 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.883475 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l68gs\" (UniqueName: \"kubernetes.io/projected/eba65cf9-c598-46fa-a09b-23cb90b9575f-kube-api-access-l68gs\") pod \"ovnkube-control-plane-749d76644c-6sjvb\" (UID: \"eba65cf9-c598-46fa-a09b-23cb90b9575f\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.895234 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",
\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.910124 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.927075 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.943724 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:01Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.951981 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.952048 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.952058 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.952076 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.952089 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:01Z","lastTransitionTime":"2025-09-30T09:47:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:01 crc kubenswrapper[4970]: I0930 09:47:01.981615 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" Sep 30 09:47:01 crc kubenswrapper[4970]: W0930 09:47:01.995890 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeba65cf9_c598_46fa_a09b_23cb90b9575f.slice/crio-b6e17ef39bc7fad473fb1634459e82e8058831ef652222f8e02d15cd044d2e24 WatchSource:0}: Error finding container b6e17ef39bc7fad473fb1634459e82e8058831ef652222f8e02d15cd044d2e24: Status 404 returned error can't find the container with id b6e17ef39bc7fad473fb1634459e82e8058831ef652222f8e02d15cd044d2e24 Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.055110 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.055137 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.055145 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.055158 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.055167 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:02Z","lastTransitionTime":"2025-09-30T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.068609 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/0.log" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.078198 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88"} Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.078614 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.083365 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" event={"ID":"eba65cf9-c598-46fa-a09b-23cb90b9575f","Type":"ContainerStarted","Data":"b6e17ef39bc7fad473fb1634459e82e8058831ef652222f8e02d15cd044d2e24"} Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.094755 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.110956 4970 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.126838 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.141053 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.156499 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.160273 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.160309 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.160320 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.160338 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.160352 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:02Z","lastTransitionTime":"2025-09-30T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.172429 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.186376 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.200825 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.217105 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.230526 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.252948 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c
5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:00Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 09:47:00.334524 6227 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0930 09:47:00.334217 6227 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0930 09:47:00.334681 6227 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0930 09:47:00.334706 6227 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0930 09:47:00.334790 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0930 09:47:00.334828 6227 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0930 09:47:00.334872 6227 handler.go:208] Removed *v1.Node event handler 7\\\\nI0930 09:47:00.334895 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0930 09:47:00.335031 6227 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0930 09:47:00.335188 6227 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 09:47:00.335352 6227 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0930 09:47:00.335424 6227 factory.go:656] Stopping watch factory\\\\nI0930 09:47:00.335444 6227 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.262833 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.262873 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.262884 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.262903 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.262915 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:02Z","lastTransitionTime":"2025-09-30T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.272781 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.290272 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.311907 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.333716 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:02Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.365194 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.365227 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.365237 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.365251 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.365260 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:02Z","lastTransitionTime":"2025-09-30T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.467152 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.467201 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.467216 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.467236 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.467249 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:02Z","lastTransitionTime":"2025-09-30T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.569545 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.569576 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.569584 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.569597 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.569606 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:02Z","lastTransitionTime":"2025-09-30T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.693437 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.693497 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:02 crc kubenswrapper[4970]: E0930 09:47:02.693563 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.693636 4970 scope.go:117] "RemoveContainer" containerID="6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.693651 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:02 crc kubenswrapper[4970]: E0930 09:47:02.693692 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:02 crc kubenswrapper[4970]: E0930 09:47:02.693790 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.695081 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.695120 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.695132 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.695147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.695158 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:02Z","lastTransitionTime":"2025-09-30T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.798822 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.798856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.798866 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.798879 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.798890 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:02Z","lastTransitionTime":"2025-09-30T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.900532 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.900558 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.900565 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.900577 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:02 crc kubenswrapper[4970]: I0930 09:47:02.900586 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:02Z","lastTransitionTime":"2025-09-30T09:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.003082 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.003128 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.003140 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.003157 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.003168 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:03Z","lastTransitionTime":"2025-09-30T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.089411 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/1.log" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.090241 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/0.log" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.093841 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" containerID="36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88" exitCode=1 Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.093956 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.094082 4970 scope.go:117] "RemoveContainer" containerID="4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.095445 4970 scope.go:117] "RemoveContainer" containerID="36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88" Sep 30 09:47:03 crc kubenswrapper[4970]: E0930 09:47:03.095743 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\"" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.096419 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" event={"ID":"eba65cf9-c598-46fa-a09b-23cb90b9575f","Type":"ContainerStarted","Data":"a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.096470 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" event={"ID":"eba65cf9-c598-46fa-a09b-23cb90b9575f","Type":"ContainerStarted","Data":"87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.099307 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.101211 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.102297 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.113252 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.113355 4970 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.113374 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.113399 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.113418 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:03Z","lastTransitionTime":"2025-09-30T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.122058 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.145246 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.171772 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.189046 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.207275 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.216590 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.216636 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.216647 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.216664 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.216676 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:03Z","lastTransitionTime":"2025-09-30T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.225609 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.237771 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.247964 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.264526 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:00Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 09:47:00.334524 6227 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0930 09:47:00.334217 6227 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0930 09:47:00.334681 6227 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0930 09:47:00.334706 6227 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0930 09:47:00.334790 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0930 09:47:00.334828 6227 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0930 09:47:00.334872 6227 handler.go:208] Removed *v1.Node event handler 7\\\\nI0930 09:47:00.334895 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0930 09:47:00.335031 6227 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0930 09:47:00.335188 6227 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 09:47:00.335352 6227 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0930 09:47:00.335424 6227 factory.go:656] Stopping watch factory\\\\nI0930 09:47:00.335444 6227 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:02Z\\\",\\\"message\\\":\\\"nport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, 
internalTrafficLocal:false, hasNodePort:false}}\\\\nI0930 09:47:02.838014 6403 services_controller.go:444] Built service openshift-network-console/networking-console-plugin LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0930 09:47:02.838006 6403 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 09:47:02.837106 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453
265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.276081 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.285808 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.295350 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.313795 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-co
py\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.319495 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.319526 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.319535 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.319553 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.319566 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:03Z","lastTransitionTime":"2025-09-30T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.328401 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-
dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.344648 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.362935 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.378778 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.398333 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.418136 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.423091 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.423131 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.423143 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.423160 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.423171 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:03Z","lastTransitionTime":"2025-09-30T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.437499 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.451321 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.474199 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.486586 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.499195 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.509188 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-sgksk"] Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.509898 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:03 crc kubenswrapper[4970]: E0930 09:47:03.510021 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.519719 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cb
e3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.525325 4970 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.525477 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.525500 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.525524 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.525539 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:03Z","lastTransitionTime":"2025-09-30T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.539493 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountP
ath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.566532 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc
9488b64f348cbb94e4dd9a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:00Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 09:47:00.334524 6227 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0930 09:47:00.334217 6227 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0930 09:47:00.334681 6227 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0930 09:47:00.334706 6227 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0930 09:47:00.334790 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0930 09:47:00.334828 6227 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0930 09:47:00.334872 6227 handler.go:208] Removed *v1.Node event handler 7\\\\nI0930 09:47:00.334895 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0930 09:47:00.335031 6227 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0930 09:47:00.335188 6227 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 09:47:00.335352 6227 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0930 09:47:00.335424 6227 factory.go:656] Stopping watch factory\\\\nI0930 09:47:00.335444 6227 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:02Z\\\",\\\"message\\\":\\\"nport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0930 09:47:02.838014 6403 services_controller.go:444] Built service openshift-network-console/networking-console-plugin LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0930 09:47:02.838006 6403 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 09:47:02.837106 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was 
not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerS
tatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.578117 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.578309 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjxpn\" (UniqueName: \"kubernetes.io/projected/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-kube-api-access-jjxpn\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.578733 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 
09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.596931 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.601574 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.614780 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.628158 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.628202 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.628212 4970 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.628226 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.628237 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:03Z","lastTransitionTime":"2025-09-30T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.629612 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.643181 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.656168 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.670539 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.679492 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.679568 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjxpn\" (UniqueName: \"kubernetes.io/projected/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-kube-api-access-jjxpn\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:03 crc kubenswrapper[4970]: E0930 09:47:03.679620 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:47:03 crc kubenswrapper[4970]: E0930 09:47:03.679682 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs podName:8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c nodeName:}" failed. No retries permitted until 2025-09-30 09:47:04.179665213 +0000 UTC m=+37.251516147 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs") pod "network-metrics-daemon-sgksk" (UID: "8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.683677 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.697329 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.697888 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjxpn\" (UniqueName: \"kubernetes.io/projected/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-kube-api-access-jjxpn\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.712082 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.722007 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.730123 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.730163 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.730173 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.730188 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.730204 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:03Z","lastTransitionTime":"2025-09-30T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.735014 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.743916 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.759114 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd9
74dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.773016 4970 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.789201 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cer
t\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4de7a0911657e286cacb168a9d5b6e0143618aad1f132f64c3f5d4d432fc0dc4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:00Z\\\",\\\"message\\\":\\\"eflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 09:47:00.334524 6227 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0930 09:47:00.334217 6227 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0930 09:47:00.334681 6227 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0930 09:47:00.334706 6227 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0930 09:47:00.334790 6227 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0930 09:47:00.334828 6227 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0930 09:47:00.334872 6227 handler.go:208] Removed *v1.Node event handler 7\\\\nI0930 09:47:00.334895 6227 handler.go:208] Removed *v1.Node event handler 2\\\\nI0930 09:47:00.335031 6227 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0930 09:47:00.335188 6227 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0930 09:47:00.335352 6227 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0930 09:47:00.335424 6227 factory.go:656] Stopping watch factory\\\\nI0930 09:47:00.335444 6227 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:02Z\\\",\\\"message\\\":\\\"nport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0930 09:47:02.838014 6403 services_controller.go:444] Built service openshift-network-console/networking-console-plugin LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0930 09:47:02.838006 6403 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 09:47:02.837106 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.799931 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.808507 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 
09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.817128 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:03Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.832953 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.833014 4970 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.833028 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.833042 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.833051 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:03Z","lastTransitionTime":"2025-09-30T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.935114 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.935169 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.935235 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.935264 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:03 crc kubenswrapper[4970]: I0930 09:47:03.935281 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:03Z","lastTransitionTime":"2025-09-30T09:47:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.037727 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.038136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.038261 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.038343 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.038421 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.108533 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/1.log" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.116617 4970 scope.go:117] "RemoveContainer" containerID="36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88" Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.120583 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\"" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.140887 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.141509 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.141600 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.141635 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.141681 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.141702 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.158779 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.176925 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.184286 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.184901 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:47:04 crc 
kubenswrapper[4970]: E0930 09:47:04.185046 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs podName:8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c nodeName:}" failed. No retries permitted until 2025-09-30 09:47:05.184974104 +0000 UTC m=+38.256825068 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs") pod "network-metrics-daemon-sgksk" (UID: "8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.197306 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.217534 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.236293 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd9
74dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.244439 4970 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.244479 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.244489 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.244506 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.244517 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.254264 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"hos
t-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.276176 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc
9488b64f348cbb94e4dd9a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:02Z\\\",\\\"message\\\":\\\"nport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0930 09:47:02.838014 6403 services_controller.go:444] Built service openshift-network-console/networking-console-plugin LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0930 09:47:02.838006 6403 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 09:47:02.837106 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.310407 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.330847 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 
09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.347785 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.347858 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.347872 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.347893 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.348293 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.350541 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.366876 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.378716 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.391890 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.404901 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.416264 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.450867 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.450909 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.450921 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.450938 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.450972 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.486637 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.486838 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:47:20.486810922 +0000 UTC m=+53.558661876 (durationBeforeRetry 16s). 
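The unmount failure just above is parked rather than retried immediately: "No retries permitted until … (durationBeforeRetry 16s)" is the kubelet's exponential backoff for failed volume operations, where each consecutive failure roughly doubles the wait before the next attempt (the mount retries further down show the same 16s). A minimal sketch of that doubling; the 0.5s initial wait, factor of 2, and cap are illustrative assumptions, not constants read from this log:

```python
# Hedged sketch of the doubling backoff implied by "durationBeforeRetry 16s"
# above; the 0.5 s initial wait, 2x factor, and 120 s cap are assumptions
# for illustration, not values taken from the kubelet.
def next_backoff(current_s: float, factor: float = 2.0, cap_s: float = 120.0) -> float:
    """Return the wait before the next retry: doubled, but never above the cap."""
    return min(current_s * factor, cap_s)

wait = 0.5
for failure in range(1, 7):
    print(f"failure {failure}: wait {wait:g}s before retrying")
    wait = next_backoff(wait)
# Under these assumed constants the sixth consecutive failure waits 16s,
# matching the durationBeforeRetry reported in the log.
```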
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.553469 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.553509 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.553523 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.553541 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.553556 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.587704 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.587750 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.587791 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.587828 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.587944 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.587962 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.587974 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.588045 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 09:47:20.588029984 +0000 UTC m=+53.659880938 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.588396 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.588512 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.588590 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.588702 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 09:47:20.588685732 +0000 UTC m=+53.660536666 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.588397 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.588843 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:47:20.588834386 +0000 UTC m=+53.660685310 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.588401 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.589034 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:47:20.589023271 +0000 UTC m=+53.660874285 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.656297 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.656349 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.656357 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.656371 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.656380 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.663801 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.663966 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.664124 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.664250 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.664379 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.668313 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.668362 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.668405 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.668455 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.668362 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.668526 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.676523 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.679205 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.679279 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.679291 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.679308 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.679320 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.690829 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.693744 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.693774 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.693784 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.693800 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.693811 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.704977 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.707743 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.707775 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.707787 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.707803 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.707815 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.719744 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.723043 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.723075 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.723086 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.723100 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.723110 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.734881 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:04Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:04 crc kubenswrapper[4970]: E0930 09:47:04.735079 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.760330 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
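Every one of the status-patch retries above fails identically: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate whose validity ended 2025-08-24T17:21:41Z, more than a month before the node's clock (2025-09-30T09:47:04Z), so the API server rejects the patch before the node object is ever touched. A minimal Go sketch to confirm this from the node itself (assuming the endpoint is reachable locally; InsecureSkipVerify is deliberate, since verification is exactly what fails and we only want to read the validity window):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Webhook endpoint taken from the kubelet error above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()

	// First peer certificate is the serving (leaf) certificate.
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:    %s\n", cert.Subject)
	fmt.Printf("not before: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("not after:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:    %v\n", time.Now().After(cert.NotAfter))
}
```

If the printed window matches the 2025-08-24 cutoff in the log, the status-update failures are purely a certificate-rotation problem, separate from the CNI errors interleaved with them.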
event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.760378 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.760390 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.760407 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.760425 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.862072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.862104 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.862113 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.862125 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.862133 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.964581 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.964613 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.964622 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.964635 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:04 crc kubenswrapper[4970]: I0930 09:47:04.964644 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:04Z","lastTransitionTime":"2025-09-30T09:47:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.069209 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.069257 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.069267 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.069282 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.069292 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:05Z","lastTransitionTime":"2025-09-30T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.172574 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.172618 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.172627 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.172650 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.172661 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:05Z","lastTransitionTime":"2025-09-30T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.193597 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:05 crc kubenswrapper[4970]: E0930 09:47:05.193900 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:47:05 crc kubenswrapper[4970]: E0930 09:47:05.194068 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs podName:8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c nodeName:}" failed. No retries permitted until 2025-09-30 09:47:07.194041342 +0000 UTC m=+40.265892276 (durationBeforeRetry 2s). 
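The MountVolume failure just above is a third dependency: the metrics-certs volume for network-metrics-daemon-sgksk cannot be set up because openshift-multus/metrics-daemon-secret is not registered with the kubelet's secret manager (which usually means the pod has not fully synced, not that the secret is gone), and the operation is backed off for 2s (no retry before 09:47:07). A hedged client-go sketch to check whether the secret exists on the API server at all, as opposed to merely not having synced to the node; the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig location; use whatever reaches this cluster.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// Namespace and name are taken from the kubelet error above.
	s, err := clientset.CoreV1().Secrets("openshift-multus").
		Get(context.TODO(), "metrics-daemon-secret", metav1.GetOptions{})
	if err != nil {
		// A NotFound here means the secret is genuinely missing,
		// not just unregistered on the node.
		log.Fatalf("get secret: %v", err)
	}
	fmt.Printf("secret %s/%s exists with %d keys\n", s.Namespace, s.Name, len(s.Data))
}
```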
Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.276332 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.276391 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.276402 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.276423 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.276440 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:05Z","lastTransitionTime":"2025-09-30T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.483485 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.483716 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.483826 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.483908 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.484036 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:05Z","lastTransitionTime":"2025-09-30T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.586857 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.586896 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.586908 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.586926 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.586938 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:05Z","lastTransitionTime":"2025-09-30T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.668254 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:05 crc kubenswrapper[4970]: E0930 09:47:05.668485 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.690043 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.690086 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.690098 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.690115 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.690127 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:05Z","lastTransitionTime":"2025-09-30T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.793124 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.793223 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.793244 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.793275 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.793298 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:05Z","lastTransitionTime":"2025-09-30T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.896551 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.896657 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.896680 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.896707 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.896725 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:05Z","lastTransitionTime":"2025-09-30T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.999095 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.999147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.999164 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.999189 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:05 crc kubenswrapper[4970]: I0930 09:47:05.999209 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:05Z","lastTransitionTime":"2025-09-30T09:47:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.102164 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.102520 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.102709 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.102849 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.102971 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:06Z","lastTransitionTime":"2025-09-30T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.205371 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.205420 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.205437 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.205459 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.205474 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:06Z","lastTransitionTime":"2025-09-30T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.308113 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.308166 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.308182 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.308200 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.308213 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:06Z","lastTransitionTime":"2025-09-30T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.411631 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.411696 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.411712 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.411736 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.411753 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:06Z","lastTransitionTime":"2025-09-30T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.514920 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.514979 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.515018 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.515045 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.515062 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:06Z","lastTransitionTime":"2025-09-30T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.619023 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.619069 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.619087 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.619110 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.619130 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:06Z","lastTransitionTime":"2025-09-30T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.667853 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.667888 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.668072 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:06 crc kubenswrapper[4970]: E0930 09:47:06.668256 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:06 crc kubenswrapper[4970]: E0930 09:47:06.668430 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:06 crc kubenswrapper[4970]: E0930 09:47:06.668581 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.722160 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.722204 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.722215 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.722233 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.722244 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:06Z","lastTransitionTime":"2025-09-30T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.824947 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.825010 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.825023 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.825039 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.825050 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:06Z","lastTransitionTime":"2025-09-30T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.928658 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.928732 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.928752 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.928782 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:06 crc kubenswrapper[4970]: I0930 09:47:06.928804 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:06Z","lastTransitionTime":"2025-09-30T09:47:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.032127 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.032209 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.032227 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.032252 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.032268 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:07Z","lastTransitionTime":"2025-09-30T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.140238 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.140672 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.140720 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.140762 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.140798 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:07Z","lastTransitionTime":"2025-09-30T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.217135 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:07 crc kubenswrapper[4970]: E0930 09:47:07.217454 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:47:07 crc kubenswrapper[4970]: E0930 09:47:07.217619 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs podName:8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c nodeName:}" failed. No retries permitted until 2025-09-30 09:47:11.217587855 +0000 UTC m=+44.289438789 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs") pod "network-metrics-daemon-sgksk" (UID: "8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.245265 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.245354 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.245380 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.245412 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.245436 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:07Z","lastTransitionTime":"2025-09-30T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.349297 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.349407 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.349425 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.349452 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.349471 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:07Z","lastTransitionTime":"2025-09-30T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.452419 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.452564 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.452609 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.452645 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.452671 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:07Z","lastTransitionTime":"2025-09-30T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.555958 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.556043 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.556061 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.556085 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.556101 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:07Z","lastTransitionTime":"2025-09-30T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.659443 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.659498 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.659517 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.659541 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.659559 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:07Z","lastTransitionTime":"2025-09-30T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.667869 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:07 crc kubenswrapper[4970]: E0930 09:47:07.668134 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.689909 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.712097 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.729383 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.743207 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.762058 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.762148 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.762168 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:07 
crc kubenswrapper[4970]: I0930 09:47:07.762192 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.762210 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:07Z","lastTransitionTime":"2025-09-30T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.762101 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:
46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.780653 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"hos
t-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.804378 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc
9488b64f348cbb94e4dd9a88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:02Z\\\",\\\"message\\\":\\\"nport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0930 09:47:02.838014 6403 services_controller.go:444] Built service openshift-network-console/networking-console-plugin LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0930 09:47:02.838006 6403 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 09:47:02.837106 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.824546 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.840266 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.854173 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 
09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.865114 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.865166 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.865180 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.865243 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.865259 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:07Z","lastTransitionTime":"2025-09-30T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.869376 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.884474 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.899684 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.915703 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.932552 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.951378 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.968446 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.968495 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.968509 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.968530 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:07 crc kubenswrapper[4970]: I0930 09:47:07.968544 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:07Z","lastTransitionTime":"2025-09-30T09:47:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.071248 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.071322 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.071342 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.071368 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.071387 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:08Z","lastTransitionTime":"2025-09-30T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.173926 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.174025 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.174070 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.174095 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.174114 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:08Z","lastTransitionTime":"2025-09-30T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.277103 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.277148 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.277158 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.277175 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.277186 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:08Z","lastTransitionTime":"2025-09-30T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.380480 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.380552 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.380576 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.380607 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.380629 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:08Z","lastTransitionTime":"2025-09-30T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.485647 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.485719 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.485741 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.485770 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.485791 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:08Z","lastTransitionTime":"2025-09-30T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.589590 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.589669 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.589690 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.589893 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.589910 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:08Z","lastTransitionTime":"2025-09-30T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.667838 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.667838 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.667880 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:08 crc kubenswrapper[4970]: E0930 09:47:08.668115 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:08 crc kubenswrapper[4970]: E0930 09:47:08.668318 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:08 crc kubenswrapper[4970]: E0930 09:47:08.668458 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.692621 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.692680 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.692701 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.692731 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.692752 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:08Z","lastTransitionTime":"2025-09-30T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.795057 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.795134 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.795150 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.795173 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.795187 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:08Z","lastTransitionTime":"2025-09-30T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.898565 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.898633 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.898650 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.898671 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:08 crc kubenswrapper[4970]: I0930 09:47:08.898682 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:08Z","lastTransitionTime":"2025-09-30T09:47:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.002908 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.003049 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.003076 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.003107 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.003134 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:09Z","lastTransitionTime":"2025-09-30T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.107439 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.107500 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.107517 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.107545 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.107564 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:09Z","lastTransitionTime":"2025-09-30T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.210578 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.210649 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.210672 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.210703 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.210724 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:09Z","lastTransitionTime":"2025-09-30T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.313889 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.313946 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.313964 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.314013 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.314030 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:09Z","lastTransitionTime":"2025-09-30T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.416910 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.416973 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.416999 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.417020 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.417030 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:09Z","lastTransitionTime":"2025-09-30T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.520359 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.520449 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.520476 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.520505 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.520536 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:09Z","lastTransitionTime":"2025-09-30T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.623098 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.623153 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.623166 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.623186 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.623198 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:09Z","lastTransitionTime":"2025-09-30T09:47:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Sep 30 09:47:09 crc kubenswrapper[4970]: I0930 09:47:09.668128 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:47:09 crc kubenswrapper[4970]: E0930 09:47:09.668339 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"
Sep 30 09:47:10 crc kubenswrapper[4970]: I0930 09:47:10.667585 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 09:47:10 crc kubenswrapper[4970]: I0930 09:47:10.667618 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 09:47:10 crc kubenswrapper[4970]: I0930 09:47:10.667606 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 09:47:10 crc kubenswrapper[4970]: E0930 09:47:10.667801 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 09:47:10 crc kubenswrapper[4970]: E0930 09:47:10.668023 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 09:47:10 crc kubenswrapper[4970]: E0930 09:47:10.668194 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 09:47:11 crc kubenswrapper[4970]: I0930 09:47:11.278683 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:47:11 crc kubenswrapper[4970]: E0930 09:47:11.278819 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Sep 30 09:47:11 crc kubenswrapper[4970]: E0930 09:47:11.278891 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs podName:8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c nodeName:}" failed. No retries permitted until 2025-09-30 09:47:19.27886988 +0000 UTC m=+52.350720814 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs") pod "network-metrics-daemon-sgksk" (UID: "8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c") : object "openshift-multus"/"metrics-daemon-secret" not registered
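The "No retries permitted until ... (durationBeforeRetry 8s)" entry reflects kubelet's exponential backoff for failed volume mounts. A minimal sketch of that schedule; the initial duration (500 ms), factor (2x), and cap (2m2s) are assumptions mirroring commonly cited kubelet defaults, and only the observed 8s step comes from this log:

```python
# Sketch: the retry schedule implied by "durationBeforeRetry 8s".
# Initial duration, factor, and cap are assumed defaults, not values
# printed in this log; the 8s step is the one observed above.
from datetime import timedelta

INITIAL = timedelta(milliseconds=500)  # assumed
FACTOR = 2.0                           # assumed
CAP = timedelta(minutes=2, seconds=2)  # assumed

def backoff_schedule(attempts: int):
    """Yield the wait before each retry attempt, doubling up to the cap."""
    wait = INITIAL
    for _ in range(attempts):
        yield min(wait, CAP)
        wait = timedelta(seconds=min(wait.total_seconds() * FACTOR,
                                     CAP.total_seconds()))

if __name__ == "__main__":
    for i, wait in enumerate(backoff_schedule(8), start=1):
        print(f"attempt {i}: wait {wait.total_seconds():g}s")
    # 0.5s, 1s, 2s, 4s, 8s, ... -> the 8s in the log would be the fifth step.
```

Under these assumptions the mount has already failed several times before this entry, which is consistent with the retry not being permitted until 09:47:19.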
Sep 30 09:47:11 crc kubenswrapper[4970]: I0930 09:47:11.668120 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:47:11 crc kubenswrapper[4970]: E0930 09:47:11.668334 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"
Sep 30 09:47:12 crc kubenswrapper[4970]: I0930 09:47:12.667377 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 09:47:12 crc kubenswrapper[4970]: I0930 09:47:12.667458 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 09:47:12 crc kubenswrapper[4970]: I0930 09:47:12.667385 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 09:47:12 crc kubenswrapper[4970]: E0930 09:47:12.667571 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 09:47:12 crc kubenswrapper[4970]: E0930 09:47:12.667670 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 09:47:12 crc kubenswrapper[4970]: E0930 09:47:12.667759 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 09:47:13 crc kubenswrapper[4970]: I0930 09:47:13.667664 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:47:13 crc kubenswrapper[4970]: E0930 09:47:13.667769 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"
Has your network provider started?"} Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.590056 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.590157 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.590177 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.590205 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.590224 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:14Z","lastTransitionTime":"2025-09-30T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.668171 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.668239 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.668239 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:14 crc kubenswrapper[4970]: E0930 09:47:14.668841 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:14 crc kubenswrapper[4970]: E0930 09:47:14.668936 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:14 crc kubenswrapper[4970]: E0930 09:47:14.669111 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
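The pod sync errors and the NotReady condition above all reduce to a single cause: the kubelet reports NetworkReady=false because no CNI configuration file exists in /etc/kubernetes/cni/net.d/, i.e. the network provider has not written its config yet. Below is a minimal Go sketch of that kind of directory check; it is an illustration only, not the kubelet's or CRI-O's actual code, and it assumes the conventional CNI config extensions .conf, .conflist, and .json.

    // cnicheck.go - report whether a CNI network config is present in the
    // directory named in the log. Hypothetical diagnostic, not kubelet code.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/kubernetes/cni/net.d" // directory from the log message
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Printf("cannot read %s: %v\n", confDir, err)
            os.Exit(1)
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) { // assumed conventional extensions
            case ".conf", ".conflist", ".json":
                fmt.Printf("found CNI config: %s\n", filepath.Join(confDir, e.Name()))
                found = true
            }
        }
        if !found {
            // This is the state the kubelet keeps reporting: the network
            // provider has not (yet) written its configuration.
            fmt.Println("no CNI configuration file found; network plugin not ready")
            os.Exit(1)
        }
    }

Until such a file appears, every pod that needs the cluster network will keep failing to sync exactly as logged above.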
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.669161 4970 scope.go:117] "RemoveContainer" containerID="36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.693475 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.693557 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.693581 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.693611 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.693634 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:14Z","lastTransitionTime":"2025-09-30T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.796841 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.796912 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.796936 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.796960 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.796978 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:14Z","lastTransitionTime":"2025-09-30T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.830284 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.830322 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.830331 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.830346 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.830355 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:14Z","lastTransitionTime":"2025-09-30T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:14 crc kubenswrapper[4970]: E0930 09:47:14.845718 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:14Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.856020 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.856071 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.856085 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.856106 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.856120 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:14Z","lastTransitionTime":"2025-09-30T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:14 crc kubenswrapper[4970]: E0930 09:47:14.875616 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:14Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.884460 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.884500 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.884510 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.884527 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.884536 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:14Z","lastTransitionTime":"2025-09-30T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:14 crc kubenswrapper[4970]: E0930 09:47:14.908815 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:14Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.913321 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.913348 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
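Each "Node became not ready" record is the kubelet's local view of the Ready condition it is trying (and failing) to persist to the API server. A hedged client-go sketch for reading the same conditions from outside the node follows; the kubeconfig path is an assumption for illustration, while the node name "crc" comes from the log.

    // nodecond.go - print a node's status conditions: the same
    // type/status/reason/message fields that setters.go logs above.
    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: an admin kubeconfig at this path; adjust as needed.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range node.Status.Conditions {
            fmt.Printf("%-16s %-6s %-28s %s\n", c.Type, c.Status, c.Reason, c.Message)
        }
    }

Because the status patches in this log are being rejected by the webhook, the conditions visible through the API may lag what the kubelet records locally.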
event="NodeHasNoDiskPressure" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.913358 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.913373 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.913384 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:14Z","lastTransitionTime":"2025-09-30T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:14 crc kubenswrapper[4970]: E0930 09:47:14.932073 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:14Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.935861 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.935929 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
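Every status write in this stretch dies on the same check: Go's crypto/x509 validity test, which rejects the node-identity webhook's serving certificate because the wall clock (2025-09-30) is past the cert's NotAfter (2025-08-24T17:21:41Z) — the identical NotAfter in every failure below, so a single expired cert explains all of them. A minimal sketch of that check, assuming the webhook is still listening on 127.0.0.1:9743 as in the log; the probe itself is illustrative, not kubelet or webhook source:

```go
// Inspect the cert presented at 127.0.0.1:9743 and apply the same
// NotBefore/NotAfter comparison that produces
// "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Skip chain verification so we can still read the leaf certificate
	// that normal verification rejects as expired.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf, now := certs[0], time.Now()
	switch {
	case now.Before(leaf.NotBefore):
		fmt.Printf("certificate is not yet valid: current time %s is before %s\n",
			now.UTC().Format(time.RFC3339), leaf.NotBefore.UTC().Format(time.RFC3339))
	case now.After(leaf.NotAfter):
		// Matches the log: current time 2025-09-30T09:47:14Z is after
		// 2025-08-24T17:21:41Z.
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate valid until", leaf.NotAfter.UTC().Format(time.RFC3339))
	}
}
```
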
event="NodeHasNoDiskPressure" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.935954 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.936015 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.936056 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:14Z","lastTransitionTime":"2025-09-30T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:14 crc kubenswrapper[4970]: E0930 09:47:14.948313 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:14Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:14 crc kubenswrapper[4970]: E0930 09:47:14.948463 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.950636 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
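The pair of records just above — "Error updating node status, will retry" followed by "Unable to update node status" err="update node status exceeds retry count" — is the shape of a bounded retry loop: the kubelet attempts the status patch a fixed number of times per sync before giving up, which is why the full image-list payload appears verbatim more than once. A sketch of that loop, with the constant named after the kubelet's nodeStatusUpdateRetry (historically 5); the patch closure is a stand-in for the real API call, not kubelet source:

```go
// Bounded retry: each failed attempt logs "will retry"; exhausting the
// budget yields the terminal "exceeds retry count" error seen above.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed to mirror the kubelet's constant

func updateNodeStatus(patch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patch(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// While the webhook cert stays expired, every attempt fails the same
	// way, so the loop always runs out of budget.
	err := updateNodeStatus(func() error {
		return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": x509: certificate has expired or is not yet valid`)
	})
	fmt.Println(err)
}
```
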
event="NodeHasSufficientMemory" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.950686 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.950701 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.950722 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:14 crc kubenswrapper[4970]: I0930 09:47:14.950739 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:14Z","lastTransitionTime":"2025-09-30T09:47:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.053206 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.053284 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.053301 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.053326 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.053344 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:15Z","lastTransitionTime":"2025-09-30T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.156558 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.156641 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.156664 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.156694 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.156717 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:15Z","lastTransitionTime":"2025-09-30T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.160406 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/1.log" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.163596 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68"} Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.164513 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.187919 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.207334 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.222724 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.236783 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.252176 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.259206 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.259268 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.259288 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.259314 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.259332 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:15Z","lastTransitionTime":"2025-09-30T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.279276 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.303863 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host
-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.339635 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688
627bd19c5cf99bd60808bf68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:02Z\\\",\\\"message\\\":\\\"nport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0930 09:47:02.838014 6403 services_controller.go:444] Built service openshift-network-console/networking-console-plugin LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0930 09:47:02.838006 6403 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 09:47:02.837106 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occurr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.394104 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.394136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.394145 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.394157 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.394165 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:15Z","lastTransitionTime":"2025-09-30T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.395215 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.408839 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 
09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.420745 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.436421 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.454388 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.470087 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.483691 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.495435 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.495666 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.495706 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.495720 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.495738 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.495751 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:15Z","lastTransitionTime":"2025-09-30T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.598149 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.598196 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.598210 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.598229 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.598242 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:15Z","lastTransitionTime":"2025-09-30T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.668054 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:15 crc kubenswrapper[4970]: E0930 09:47:15.668258 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.701487 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.701544 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.701556 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.701576 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.701590 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:15Z","lastTransitionTime":"2025-09-30T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.804207 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.804262 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.804278 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.804298 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.804311 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:15Z","lastTransitionTime":"2025-09-30T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.906924 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.907179 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.907194 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.907214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:15 crc kubenswrapper[4970]: I0930 09:47:15.907229 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:15Z","lastTransitionTime":"2025-09-30T09:47:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.010757 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.010813 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.010825 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.010842 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.010852 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:16Z","lastTransitionTime":"2025-09-30T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.114386 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.114452 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.114466 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.114485 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.114496 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:16Z","lastTransitionTime":"2025-09-30T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.169692 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/2.log" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.170609 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/1.log" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.174696 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" containerID="c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68" exitCode=1 Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.174772 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68"} Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.174888 4970 scope.go:117] "RemoveContainer" containerID="36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.175957 4970 scope.go:117] "RemoveContainer" containerID="c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68" Sep 30 09:47:16 crc kubenswrapper[4970]: E0930 09:47:16.176209 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\"" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.193979 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.211499 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.217034 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.217139 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.217160 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.217186 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.217208 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:16Z","lastTransitionTime":"2025-09-30T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.230167 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.247311 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.261602 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.272553 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.282507 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.292044 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.307271 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.319233 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.319284 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:16 crc 
kubenswrapper[4970]: I0930 09:47:16.319302 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.319336 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.319353 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:16Z","lastTransitionTime":"2025-09-30T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.320356 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"n
ame\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.337246 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688
627bd19c5cf99bd60808bf68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36051f8a27aabdca350246d522dc275acd1d03bc9488b64f348cbb94e4dd9a88\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:02Z\\\",\\\"message\\\":\\\"nport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0930 09:47:02.838014 6403 services_controller.go:444] Built service openshift-network-console/networking-console-plugin LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0930 09:47:02.838006 6403 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0930 09:47:02.837106 6403 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurr\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:15Z\\\",\\\"message\\\":\\\"ocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0930 09:47:15.819884 6628 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 09:47:15.819896 6628 lb_config.go:1031] Cluster endpoints 
for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0930 09:47:15.819906 6628 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.349161 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.361794 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.371407 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.384916 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 
09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.394160 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:16Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.422201 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.422235 4970 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.422246 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.422260 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.422272 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:16Z","lastTransitionTime":"2025-09-30T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.524828 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.524891 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.524908 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.524930 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.524948 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:16Z","lastTransitionTime":"2025-09-30T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.628404 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.628462 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.628481 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.628499 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.628509 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:16Z","lastTransitionTime":"2025-09-30T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.667695 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.667739 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.667742 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:16 crc kubenswrapper[4970]: E0930 09:47:16.667863 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:16 crc kubenswrapper[4970]: E0930 09:47:16.668038 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:16 crc kubenswrapper[4970]: E0930 09:47:16.668101 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.731513 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.731579 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.731602 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.731634 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.731658 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:16Z","lastTransitionTime":"2025-09-30T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.834499 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.834599 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.834624 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.834650 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.834667 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:16Z","lastTransitionTime":"2025-09-30T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.937922 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.938032 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.938057 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.938090 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:16 crc kubenswrapper[4970]: I0930 09:47:16.938112 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:16Z","lastTransitionTime":"2025-09-30T09:47:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.040906 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.041031 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.041059 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.041099 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.041122 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:17Z","lastTransitionTime":"2025-09-30T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.144438 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.144498 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.144515 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.144537 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.144554 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:17Z","lastTransitionTime":"2025-09-30T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.181452 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/2.log" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.192257 4970 scope.go:117] "RemoveContainer" containerID="c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68" Sep 30 09:47:17 crc kubenswrapper[4970]: E0930 09:47:17.192554 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\"" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.210662 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.225978 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.244969 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.248174 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.248239 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.248254 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.248281 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.248304 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:17Z","lastTransitionTime":"2025-09-30T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.266534 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.285307 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.300809 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.326908 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.345962 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.351318 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.351368 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.351378 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.351398 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.351409 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:17Z","lastTransitionTime":"2025-09-30T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.376604 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688
627bd19c5cf99bd60808bf68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:15Z\\\",\\\"message\\\":\\\"ocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0930 09:47:15.819884 6628 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 09:47:15.819896 6628 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0930 09:47:15.819906 6628 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.394889 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.410896 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 
09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.432812 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.452310 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.454185 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.454250 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.454434 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.454469 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.454496 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:17Z","lastTransitionTime":"2025-09-30T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.468772 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.483588 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.502444 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.558152 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.558220 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.558238 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.558260 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.558272 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:17Z","lastTransitionTime":"2025-09-30T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.661488 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.661618 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.661642 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.661664 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.661680 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:17Z","lastTransitionTime":"2025-09-30T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.668139 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:17 crc kubenswrapper[4970]: E0930 09:47:17.668394 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.686162 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.702381 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.722779 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.741646 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.756028 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.763458 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.763529 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.763551 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.763581 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.763607 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:17Z","lastTransitionTime":"2025-09-30T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.772039 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.788807 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.805853 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.818371 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.833950 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.855952 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.866513 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.866546 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.866557 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.866573 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.866586 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:17Z","lastTransitionTime":"2025-09-30T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.889922 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688
627bd19c5cf99bd60808bf68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:15Z\\\",\\\"message\\\":\\\"ocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0930 09:47:15.819884 6628 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 09:47:15.819896 6628 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0930 09:47:15.819906 6628 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.913631 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.934782 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.942808 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.951258 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 
09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.969138 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.969696 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.969910 4970 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.970258 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.970489 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.971064 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:17Z","lastTransitionTime":"2025-09-30T09:47:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:17 crc kubenswrapper[4970]: I0930 09:47:17.990906 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:17Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.010422 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.024513 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.038605 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.051860 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.066666 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:
46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.075172 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.075210 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.075220 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.075234 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.075249 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:18Z","lastTransitionTime":"2025-09-30T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.084263 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"sy
stem-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.104093 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688
627bd19c5cf99bd60808bf68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:15Z\\\",\\\"message\\\":\\\"ocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0930 09:47:15.819884 6628 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 09:47:15.819896 6628 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0930 09:47:15.819906 6628 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.119965 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.123752 4970 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.132078 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.135432 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.150123 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.172850 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.177509 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.177567 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:18 crc 
kubenswrapper[4970]: I0930 09:47:18.177653 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.177688 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.177712 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:18Z","lastTransitionTime":"2025-09-30T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.189154 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:4
7:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.203211 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.220111 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.235042 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.251706 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.270917 4970 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.286809 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.286923 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.286951 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.287062 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.287091 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:18Z","lastTransitionTime":"2025-09-30T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.292413 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.311060 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c381c9-b49e-4a91-a2f9-623f77db9c65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1889d9f91438a275e1ff6fc0adc74a42e47bcab961f06b96fd06acdf63819fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://266e8cda32e3cc8777714f25448405d0487add6c95f695e43ee9f7674ea1287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72de87daf9a24b2d7b7f2a45bce5ca85e4476ed82f44a39639554aad8e4b328c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.335389 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.349256 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.365555 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd9
74dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.382774 4970 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.390176 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.390224 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.390236 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.390254 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.390270 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:18Z","lastTransitionTime":"2025-09-30T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.415946 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688
627bd19c5cf99bd60808bf68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:15Z\\\",\\\"message\\\":\\\"ocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0930 09:47:15.819884 6628 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 09:47:15.819896 6628 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0930 09:47:15.819906 6628 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.440111 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.458729 4970 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.471779 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.487548 4970 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.492378 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.492416 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.492427 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.492444 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.492455 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:18Z","lastTransitionTime":"2025-09-30T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.511196 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.532783 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.551253 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.573528 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:18Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.594738 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.594776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.594786 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.594802 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.594814 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:18Z","lastTransitionTime":"2025-09-30T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.668285 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.668285 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.668532 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:18 crc kubenswrapper[4970]: E0930 09:47:18.668517 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:18 crc kubenswrapper[4970]: E0930 09:47:18.668678 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:18 crc kubenswrapper[4970]: E0930 09:47:18.668798 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.696867 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.696925 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.696943 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.696968 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.697027 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:18Z","lastTransitionTime":"2025-09-30T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.800086 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.800145 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.800165 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.800193 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.800216 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:18Z","lastTransitionTime":"2025-09-30T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.903473 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.903532 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.903555 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.903582 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:18 crc kubenswrapper[4970]: I0930 09:47:18.903603 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:18Z","lastTransitionTime":"2025-09-30T09:47:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.006912 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.007021 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.007043 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.007067 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.007086 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:19Z","lastTransitionTime":"2025-09-30T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.110437 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.110508 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.110542 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.110572 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.110593 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:19Z","lastTransitionTime":"2025-09-30T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.213295 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.213378 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.213408 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.213436 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.213456 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:19Z","lastTransitionTime":"2025-09-30T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.316034 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.316112 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.316136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.316166 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.316190 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:19Z","lastTransitionTime":"2025-09-30T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.369369 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:19 crc kubenswrapper[4970]: E0930 09:47:19.369612 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:47:19 crc kubenswrapper[4970]: E0930 09:47:19.369755 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs podName:8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c nodeName:}" failed. No retries permitted until 2025-09-30 09:47:35.369715404 +0000 UTC m=+68.441566428 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs") pod "network-metrics-daemon-sgksk" (UID: "8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.418331 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.418414 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.418438 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.418469 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.418493 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:19Z","lastTransitionTime":"2025-09-30T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.521508 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.521553 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.521564 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.521579 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.521592 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:19Z","lastTransitionTime":"2025-09-30T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.624177 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.624211 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.624219 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.624232 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.624241 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:19Z","lastTransitionTime":"2025-09-30T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.667605 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:19 crc kubenswrapper[4970]: E0930 09:47:19.667739 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.727349 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.727419 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.727443 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.727470 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.727492 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:19Z","lastTransitionTime":"2025-09-30T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.830834 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.830877 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.830889 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.830905 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.830916 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:19Z","lastTransitionTime":"2025-09-30T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.933852 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.934093 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.934132 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.934151 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:19 crc kubenswrapper[4970]: I0930 09:47:19.934165 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:19Z","lastTransitionTime":"2025-09-30T09:47:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.036934 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.037047 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.037084 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.037114 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.037130 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:20Z","lastTransitionTime":"2025-09-30T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.139790 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.139845 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.139862 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.139889 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.139940 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:20Z","lastTransitionTime":"2025-09-30T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.242928 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.243032 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.243056 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.243082 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.243100 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:20Z","lastTransitionTime":"2025-09-30T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.346141 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.346201 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.346218 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.346241 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.346260 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:20Z","lastTransitionTime":"2025-09-30T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.449400 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.449462 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.449488 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.449515 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.449537 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:20Z","lastTransitionTime":"2025-09-30T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.552094 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.552155 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.552170 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.552193 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.552214 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:20Z","lastTransitionTime":"2025-09-30T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.584111 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.584340 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:47:52.584310279 +0000 UTC m=+85.656161223 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.654508 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.654555 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.654568 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.654588 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.654600 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:20Z","lastTransitionTime":"2025-09-30T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.668045 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.668082 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.668082 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.668270 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.668402 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.668524 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.685853 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.685940 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.685984 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.686109 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.686118 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.686148 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.686163 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.686166 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.686225 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:47:52.68620423 +0000 UTC m=+85.758055184 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.686219 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.686286 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.686245 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 09:47:52.68623691 +0000 UTC m=+85.758087854 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.686318 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.686336 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.686342 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:47:52.686318613 +0000 UTC m=+85.758169567 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Sep 30 09:47:20 crc kubenswrapper[4970]: E0930 09:47:20.686398 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 09:47:52.686376994 +0000 UTC m=+85.758227958 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.758193 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.758252 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.758275 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.758299 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.758316 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:20Z","lastTransitionTime":"2025-09-30T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.861603 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.861671 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.861695 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.861721 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.861738 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:20Z","lastTransitionTime":"2025-09-30T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.964881 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.964940 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.964956 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.964981 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:20 crc kubenswrapper[4970]: I0930 09:47:20.965050 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:20Z","lastTransitionTime":"2025-09-30T09:47:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.068375 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.068413 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.068420 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.068432 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.068440 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:21Z","lastTransitionTime":"2025-09-30T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.172156 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.172201 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.172210 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.172223 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.172231 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:21Z","lastTransitionTime":"2025-09-30T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.274889 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.274955 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.274965 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.275008 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.275024 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:21Z","lastTransitionTime":"2025-09-30T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.377554 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.377607 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.377619 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.377639 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.377652 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:21Z","lastTransitionTime":"2025-09-30T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.480000 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.480046 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.480057 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.480076 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.480088 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:21Z","lastTransitionTime":"2025-09-30T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.583223 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.583293 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.583318 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.583358 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.583382 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:21Z","lastTransitionTime":"2025-09-30T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.668575 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:21 crc kubenswrapper[4970]: E0930 09:47:21.668719 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.685622 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.685678 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.685704 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.685732 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.685754 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:21Z","lastTransitionTime":"2025-09-30T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.789072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.789123 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.789134 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.789152 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.789163 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:21Z","lastTransitionTime":"2025-09-30T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.891494 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.891578 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.891597 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.891621 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.891640 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:21Z","lastTransitionTime":"2025-09-30T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.994769 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.994812 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.994822 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.994836 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:21 crc kubenswrapper[4970]: I0930 09:47:21.994844 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:21Z","lastTransitionTime":"2025-09-30T09:47:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.097094 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.097161 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.097179 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.097208 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.097226 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:22Z","lastTransitionTime":"2025-09-30T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.200202 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.200266 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.200286 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.200310 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.200328 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:22Z","lastTransitionTime":"2025-09-30T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.303186 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.303229 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.303242 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.303260 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.303271 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:22Z","lastTransitionTime":"2025-09-30T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.406078 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.406137 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.406148 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.406166 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.406179 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:22Z","lastTransitionTime":"2025-09-30T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.509312 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.509355 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.509391 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.509416 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.509426 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:22Z","lastTransitionTime":"2025-09-30T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.611967 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.612050 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.612063 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.612080 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.612093 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:22Z","lastTransitionTime":"2025-09-30T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.668134 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.668134 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:22 crc kubenswrapper[4970]: E0930 09:47:22.668303 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:22 crc kubenswrapper[4970]: E0930 09:47:22.668411 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.668152 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:22 crc kubenswrapper[4970]: E0930 09:47:22.668516 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.715248 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.715317 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.715339 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.715368 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.715388 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:22Z","lastTransitionTime":"2025-09-30T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.817365 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.817419 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.817436 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.817459 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.817475 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:22Z","lastTransitionTime":"2025-09-30T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.920221 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.920291 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.920302 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.920323 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:22 crc kubenswrapper[4970]: I0930 09:47:22.920337 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:22Z","lastTransitionTime":"2025-09-30T09:47:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.023876 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.023939 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.023957 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.023983 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.024032 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:23Z","lastTransitionTime":"2025-09-30T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.126808 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.126857 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.126868 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.126889 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.126916 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:23Z","lastTransitionTime":"2025-09-30T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.229759 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.229839 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.229859 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.229890 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.229913 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:23Z","lastTransitionTime":"2025-09-30T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.333961 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.334043 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.334057 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.334078 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.334091 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:23Z","lastTransitionTime":"2025-09-30T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.437522 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.437575 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.437592 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.437614 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.437631 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:23Z","lastTransitionTime":"2025-09-30T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.543669 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.543728 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.543744 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.543778 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.543791 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:23Z","lastTransitionTime":"2025-09-30T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.647669 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.647774 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.647791 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.647885 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.647913 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:23Z","lastTransitionTime":"2025-09-30T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.667783 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:23 crc kubenswrapper[4970]: E0930 09:47:23.668070 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.751568 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.751643 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.751667 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.751701 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.751751 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:23Z","lastTransitionTime":"2025-09-30T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.854370 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.854451 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.854470 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.854507 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.854524 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:23Z","lastTransitionTime":"2025-09-30T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.957903 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.957952 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.957963 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.957998 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:23 crc kubenswrapper[4970]: I0930 09:47:23.958013 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:23Z","lastTransitionTime":"2025-09-30T09:47:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.061106 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.061180 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.061203 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.061232 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.061250 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:24Z","lastTransitionTime":"2025-09-30T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.164673 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.164730 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.164740 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.164760 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.164771 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:24Z","lastTransitionTime":"2025-09-30T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.267708 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.267756 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.267767 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.267785 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.267796 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:24Z","lastTransitionTime":"2025-09-30T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.370276 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.370326 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.370334 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.370353 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.370363 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:24Z","lastTransitionTime":"2025-09-30T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.492121 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.492181 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.492192 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.492213 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.492227 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:24Z","lastTransitionTime":"2025-09-30T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.598197 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.598256 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.598273 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.598297 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.598316 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:24Z","lastTransitionTime":"2025-09-30T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.668404 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.668535 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.668853 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:24 crc kubenswrapper[4970]: E0930 09:47:24.668849 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:24 crc kubenswrapper[4970]: E0930 09:47:24.668942 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:24 crc kubenswrapper[4970]: E0930 09:47:24.668591 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.701297 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.701367 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.701379 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.701399 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.701415 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:24Z","lastTransitionTime":"2025-09-30T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.805431 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.805510 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.805533 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.805562 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.805588 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:24Z","lastTransitionTime":"2025-09-30T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.908798 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.908852 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.908868 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.908891 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:24 crc kubenswrapper[4970]: I0930 09:47:24.908908 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:24Z","lastTransitionTime":"2025-09-30T09:47:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.011873 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.011924 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.011940 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.011963 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.011980 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.115511 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.115918 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.116136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.116311 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.116509 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.161918 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.161982 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.162060 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.162127 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.162148 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: E0930 09:47:25.182542 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:25Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.189180 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.189274 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.189301 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.189335 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.189356 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: E0930 09:47:25.213034 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:25Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.218476 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.218544 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.218563 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.218592 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.218611 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: E0930 09:47:25.239056 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:25Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.243480 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.243530 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.243541 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.243565 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.243580 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: E0930 09:47:25.266255 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:25Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.271894 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.271963 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.271983 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.272032 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.272050 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: E0930 09:47:25.288100 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:25Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:25 crc kubenswrapper[4970]: E0930 09:47:25.288363 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.291232 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.291320 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.291347 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.291379 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.291405 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.394846 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.395348 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.395418 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.395506 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.395577 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.498754 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.498829 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.498845 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.498866 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.498881 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.602502 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.602572 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.602591 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.602620 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.602642 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.667680 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:25 crc kubenswrapper[4970]: E0930 09:47:25.667938 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.705856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.705975 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.706025 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.706107 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.706127 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.809591 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.809658 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.809676 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.809702 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.809720 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.912739 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.912787 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.912797 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.912819 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:25 crc kubenswrapper[4970]: I0930 09:47:25.912831 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:25Z","lastTransitionTime":"2025-09-30T09:47:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.016907 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.016972 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.016997 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.017021 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.017036 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:26Z","lastTransitionTime":"2025-09-30T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.119979 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.120051 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.120064 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.120084 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.120097 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:26Z","lastTransitionTime":"2025-09-30T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.223638 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.223703 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.223715 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.223737 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.223752 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:26Z","lastTransitionTime":"2025-09-30T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.326535 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.326650 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.326664 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.326722 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.326736 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:26Z","lastTransitionTime":"2025-09-30T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.429753 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.429835 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.429852 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.429875 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.429888 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:26Z","lastTransitionTime":"2025-09-30T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.532369 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.532417 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.532431 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.532453 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.532468 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:26Z","lastTransitionTime":"2025-09-30T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.635517 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.635558 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.635568 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.635584 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.635596 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:26Z","lastTransitionTime":"2025-09-30T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.667552 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.667588 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 09:47:26 crc kubenswrapper[4970]: E0930 09:47:26.667652 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.667686 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 09:47:26 crc kubenswrapper[4970]: E0930 09:47:26.667799 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 09:47:26 crc kubenswrapper[4970]: E0930 09:47:26.668119 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.738446 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.738519 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.738543 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.738574 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
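All of the cycles above record the same condition: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ does not yet contain a CNI configuration, so the kubelet keeps the node's Ready condition False and refuses to build sandboxes for the pods named in the "Error syncing pod, skipping" entries. In this cluster the CNI config is written by the OVN-Kubernetes node pod, which later entries show crash-looping. The check the message describes amounts to looking for a parseable network config in that directory; a minimal Python sketch of that idea, assuming only the path quoted in the log (an illustration of the condition, not the kubelet's or CRI-O's actual code):

    import json
    import pathlib

    # Directory named in the NetworkPluginNotReady message above.
    CNI_DIR = pathlib.Path("/etc/kubernetes/cni/net.d")

    # CNI network configs are JSON files ending in .conf, .conflist or .json.
    confs = sorted(p for p in CNI_DIR.glob("*")
                   if p.is_file() and p.suffix in {".conf", ".conflist", ".json"})

    if not confs:
        # This is the state the log keeps reporting.
        print(f"no CNI configuration file in {CNI_DIR}/ -- network plugin not ready")
    else:
        for p in confs:
            doc = json.loads(p.read_text())
            plugins = [pl["type"] for pl in doc.get("plugins", [])] or doc.get("type")
            print(f"{p.name}: network {doc.get('name')!r}, plugins {plugins}")

Once a config appears in that directory, NetworkReady flips to true and these NodeNotReady cycles stop.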
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.738592 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:26Z","lastTransitionTime":"2025-09-30T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.841451 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.841528 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.841550 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.841580 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.841602 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:26Z","lastTransitionTime":"2025-09-30T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.945081 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.945145 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.945162 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.945185 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:26 crc kubenswrapper[4970]: I0930 09:47:26.945202 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:26Z","lastTransitionTime":"2025-09-30T09:47:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.047963 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.048093 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.048116 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.048147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.048170 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:27Z","lastTransitionTime":"2025-09-30T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.151761 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.151841 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.151866 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.151897 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.151920 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:27Z","lastTransitionTime":"2025-09-30T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.255141 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.255231 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.255247 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.255269 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.255286 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:27Z","lastTransitionTime":"2025-09-30T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.358410 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.358493 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.358514 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.358548 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.358571 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:27Z","lastTransitionTime":"2025-09-30T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.461268 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.461342 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.461360 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.461384 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.461402 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:27Z","lastTransitionTime":"2025-09-30T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.565123 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.565192 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.565219 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.565248 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.565271 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:27Z","lastTransitionTime":"2025-09-30T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.667430 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:47:27 crc kubenswrapper[4970]: E0930 09:47:27.667912 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.670364 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.670731 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.671632 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.671959 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.672145 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:27Z","lastTransitionTime":"2025-09-30T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.691707 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.691707 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z"
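Every one of these patches is rejected for the same underlying reason, visible at the tail of the entry rather than in the patch itself: the API server must consult the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that endpoint's serving certificate expired on 2025-08-24T17:21:41Z, weeks before the node's current clock time of 2025-09-30. To confirm what the endpoint is actually serving, one can fetch the certificate with verification disabled and print its validity window. A sketch meant to be run on the node itself; it assumes the third-party cryptography package is available, and the host and port are simply copied from the webhook URL in the log:

    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party: pip install cryptography

    HOST, PORT = "127.0.0.1", 9743  # from the webhook Post URL above

    # Verification must be off: the whole point is to read an expired cert.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # dict form is empty when unverified

    cert = x509.load_der_x509_certificate(der)
    # On cryptography < 42 use not_valid_before / not_valid_after instead.
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)
    print("expired:  ", cert.not_valid_after_utc < datetime.now(timezone.utc))

If the webhook were healthy, notAfter would lie in the future; here it should print the same 2025-08-24T17:21:41Z boundary that every x509 failure in this log quotes.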
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.706557 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z"
Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.725173 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd9
74dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.745796 4970 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.776771 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.776863 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.776881 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.776907 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.776959 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:27Z","lastTransitionTime":"2025-09-30T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.781262 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688
627bd19c5cf99bd60808bf68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:15Z\\\",\\\"message\\\":\\\"ocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0930 09:47:15.819884 6628 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 09:47:15.819896 6628 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0930 09:47:15.819906 6628 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.802197 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.823636 4970 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c381c9-b49e-4a91-a2f9-623f77db9c65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1889d9f91438a275e1ff6fc0adc74a42e47bcab961f06b96fd06acdf63819fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://266e8cda32e3cc8777714f25448405d0487add6c95f695e43ee9f7674ea1287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72de87daf9a24b2d7b7f2a45bce5ca85e4476ed82f44a39639554aad8e4b328c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.845193 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4
ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.867638 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.878981 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.879063 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.879081 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.879106 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.879125 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:27Z","lastTransitionTime":"2025-09-30T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.889339 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.911880 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.931514 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.953700 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.972672 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.983945 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.984049 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.984074 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.984115 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.984143 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:27Z","lastTransitionTime":"2025-09-30T09:47:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:27 crc kubenswrapper[4970]: I0930 09:47:27.990834 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:27Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.011382 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:28Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.026108 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:28Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.087477 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.087551 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.087579 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.087642 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.087667 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:28Z","lastTransitionTime":"2025-09-30T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.191286 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.191371 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.191395 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.191483 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.191512 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:28Z","lastTransitionTime":"2025-09-30T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.294800 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.294873 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.294894 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.294925 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.294948 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:28Z","lastTransitionTime":"2025-09-30T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.397570 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.397627 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.397644 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.397669 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.397684 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:28Z","lastTransitionTime":"2025-09-30T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.499967 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.500062 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.500081 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.500103 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.500118 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:28Z","lastTransitionTime":"2025-09-30T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.603522 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.603576 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.603593 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.603615 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.603632 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:28Z","lastTransitionTime":"2025-09-30T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.667815 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.667897 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:28 crc kubenswrapper[4970]: E0930 09:47:28.668057 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.667896 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:28 crc kubenswrapper[4970]: E0930 09:47:28.668238 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:28 crc kubenswrapper[4970]: E0930 09:47:28.668500 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.669481 4970 scope.go:117] "RemoveContainer" containerID="c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68" Sep 30 09:47:28 crc kubenswrapper[4970]: E0930 09:47:28.669849 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\"" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.706880 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.706931 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.706948 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.706974 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.707026 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:28Z","lastTransitionTime":"2025-09-30T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.810349 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.810426 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.810444 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.810471 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.810489 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:28Z","lastTransitionTime":"2025-09-30T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.913646 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.913719 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.913737 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.913766 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:28 crc kubenswrapper[4970]: I0930 09:47:28.913786 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:28Z","lastTransitionTime":"2025-09-30T09:47:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.017830 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.017909 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.017946 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.017969 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.018017 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:29Z","lastTransitionTime":"2025-09-30T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.120423 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.120478 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.120496 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.120519 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.120539 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:29Z","lastTransitionTime":"2025-09-30T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.223412 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.223452 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.223464 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.223481 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.223493 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:29Z","lastTransitionTime":"2025-09-30T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.326182 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.326235 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.326254 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.326274 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.326289 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:29Z","lastTransitionTime":"2025-09-30T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.428797 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.428829 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.428837 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.428853 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.428862 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:29Z","lastTransitionTime":"2025-09-30T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.531648 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.531725 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.531747 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.531773 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.531797 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:29Z","lastTransitionTime":"2025-09-30T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.635503 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.635564 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.635581 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.635607 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.635623 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:29Z","lastTransitionTime":"2025-09-30T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.668132 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:29 crc kubenswrapper[4970]: E0930 09:47:29.668352 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.738744 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.738803 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.738815 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.738833 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.738846 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:29Z","lastTransitionTime":"2025-09-30T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.841461 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.841510 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.841522 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.841540 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.841550 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:29Z","lastTransitionTime":"2025-09-30T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.947180 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.947244 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.947262 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.947287 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:29 crc kubenswrapper[4970]: I0930 09:47:29.947305 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:29Z","lastTransitionTime":"2025-09-30T09:47:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.050087 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.050147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.050162 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.050186 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.050205 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:30Z","lastTransitionTime":"2025-09-30T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.153399 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.153458 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.153482 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.153511 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.153536 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:30Z","lastTransitionTime":"2025-09-30T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.257261 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.257327 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.257350 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.257376 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.257394 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:30Z","lastTransitionTime":"2025-09-30T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.361468 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.361526 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.361543 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.361577 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.361677 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:30Z","lastTransitionTime":"2025-09-30T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.465571 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.465614 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.465627 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.465646 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.465659 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:30Z","lastTransitionTime":"2025-09-30T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.568010 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.568040 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.568048 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.568061 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.568089 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:30Z","lastTransitionTime":"2025-09-30T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.668167 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:30 crc kubenswrapper[4970]: E0930 09:47:30.668342 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.668463 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.668547 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:30 crc kubenswrapper[4970]: E0930 09:47:30.668639 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:30 crc kubenswrapper[4970]: E0930 09:47:30.668752 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.670741 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.670803 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.670814 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.670836 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.670852 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:30Z","lastTransitionTime":"2025-09-30T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.774514 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.774588 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.774610 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.774639 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.774660 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:30Z","lastTransitionTime":"2025-09-30T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.878155 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.878253 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.878271 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.878332 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.878352 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:30Z","lastTransitionTime":"2025-09-30T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.981214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.981283 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.981300 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.981321 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:30 crc kubenswrapper[4970]: I0930 09:47:30.981337 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:30Z","lastTransitionTime":"2025-09-30T09:47:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.084104 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.084160 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.084173 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.084193 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.084207 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:31Z","lastTransitionTime":"2025-09-30T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.187179 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.187232 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.187247 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.187266 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.187280 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:31Z","lastTransitionTime":"2025-09-30T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.290081 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.290137 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.290151 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.290180 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.290194 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:31Z","lastTransitionTime":"2025-09-30T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.392831 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.392874 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.392892 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.392909 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.392921 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:31Z","lastTransitionTime":"2025-09-30T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.495972 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.496094 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.496112 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.496135 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.496152 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:31Z","lastTransitionTime":"2025-09-30T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.599495 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.599555 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.599570 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.599603 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.599620 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:31Z","lastTransitionTime":"2025-09-30T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.667981 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:31 crc kubenswrapper[4970]: E0930 09:47:31.668248 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.702669 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.702730 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.702752 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.702775 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.702791 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:31Z","lastTransitionTime":"2025-09-30T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.807825 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.807888 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.807903 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.807930 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.807945 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:31Z","lastTransitionTime":"2025-09-30T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.911891 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.911957 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.911970 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.912014 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:31 crc kubenswrapper[4970]: I0930 09:47:31.912034 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:31Z","lastTransitionTime":"2025-09-30T09:47:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.016050 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.016098 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.016110 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.016130 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.016147 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:32Z","lastTransitionTime":"2025-09-30T09:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.121092 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.122337 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.122357 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.122378 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.122394 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:32Z","lastTransitionTime":"2025-09-30T09:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.226134 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.226188 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.226201 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.226221 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.226233 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:32Z","lastTransitionTime":"2025-09-30T09:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.329911 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.329965 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.329976 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.330050 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.330066 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:32Z","lastTransitionTime":"2025-09-30T09:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.433867 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.433966 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.434036 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.434083 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.434103 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:32Z","lastTransitionTime":"2025-09-30T09:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.537096 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.537187 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.537207 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.537231 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.537247 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:32Z","lastTransitionTime":"2025-09-30T09:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.639733 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.639854 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.639873 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.639900 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.639917 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:32Z","lastTransitionTime":"2025-09-30T09:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.667798 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:32 crc kubenswrapper[4970]: E0930 09:47:32.667932 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.667802 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.667798 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:32 crc kubenswrapper[4970]: E0930 09:47:32.668039 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:32 crc kubenswrapper[4970]: E0930 09:47:32.668256 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.743234 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.743376 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.743440 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.743464 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.743479 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:32Z","lastTransitionTime":"2025-09-30T09:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.850663 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.850731 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.850753 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.850783 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.850804 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:32Z","lastTransitionTime":"2025-09-30T09:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.953940 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.953998 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.954011 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.954027 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:32 crc kubenswrapper[4970]: I0930 09:47:32.954040 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:32Z","lastTransitionTime":"2025-09-30T09:47:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.056580 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.056642 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.056660 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.056685 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.056757 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:33Z","lastTransitionTime":"2025-09-30T09:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.159530 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.159605 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.159626 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.159655 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.159678 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:33Z","lastTransitionTime":"2025-09-30T09:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.262707 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.262754 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.262765 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.262780 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.262792 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:33Z","lastTransitionTime":"2025-09-30T09:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.364794 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.364856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.364869 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.364886 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.364896 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:33Z","lastTransitionTime":"2025-09-30T09:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.467717 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.467752 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.467760 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.467774 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.467781 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:33Z","lastTransitionTime":"2025-09-30T09:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.572084 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.572128 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.572138 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.572154 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.572176 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:33Z","lastTransitionTime":"2025-09-30T09:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.668504 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:33 crc kubenswrapper[4970]: E0930 09:47:33.668631 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.674109 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.674175 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.674194 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.674216 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:33 crc kubenswrapper[4970]: I0930 09:47:33.674233 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:33Z","lastTransitionTime":"2025-09-30T09:47:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [... identical NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady heartbeat cycles repeated roughly every 100ms from 09:47:33.776827 through 09:47:34.596914 ...] Sep 30 09:47:34 crc kubenswrapper[4970]: I0930 09:47:34.596932 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:34Z","lastTransitionTime":"2025-09-30T09:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:34 crc kubenswrapper[4970]: I0930 09:47:34.667711 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:34 crc kubenswrapper[4970]: I0930 09:47:34.667746 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:34 crc kubenswrapper[4970]: I0930 09:47:34.667717 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:34 crc kubenswrapper[4970]: E0930 09:47:34.668138 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:34 crc kubenswrapper[4970]: E0930 09:47:34.668217 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:34 crc kubenswrapper[4970]: E0930 09:47:34.668275 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:34 crc kubenswrapper[4970]: I0930 09:47:34.679904 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Sep 30 09:47:34 crc kubenswrapper[4970]: I0930 09:47:34.699632 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:34 crc kubenswrapper[4970]: I0930 09:47:34.699681 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:34 crc kubenswrapper[4970]: I0930 09:47:34.699696 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:34 crc kubenswrapper[4970]: I0930 09:47:34.699717 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:34 crc kubenswrapper[4970]: I0930 09:47:34.699732 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:34Z","lastTransitionTime":"2025-09-30T09:47:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [... identical node-not-ready heartbeat cycles repeated from 09:47:34.802197 through 09:47:35.317876 ...] Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.317893 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:35Z","lastTransitionTime":"2025-09-30T09:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.332725 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.332808 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.332825 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.332853 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.332874 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:35Z","lastTransitionTime":"2025-09-30T09:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:35 crc kubenswrapper[4970]: E0930 09:47:35.359668 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:35Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.371669 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.371707 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.371717 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.371734 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.371746 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:35Z","lastTransitionTime":"2025-09-30T09:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:35 crc kubenswrapper[4970]: E0930 09:47:35.387752 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:35Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.393323 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.393358 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.393369 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.393388 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.393401 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:35Z","lastTransitionTime":"2025-09-30T09:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:35 crc kubenswrapper[4970]: E0930 09:47:35.407485 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:35Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.412148 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.412214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.412223 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.412236 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.412245 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:35Z","lastTransitionTime":"2025-09-30T09:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:35 crc kubenswrapper[4970]: E0930 09:47:35.424581 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:35Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.429643 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.429783 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.429806 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.429834 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.429858 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:35Z","lastTransitionTime":"2025-09-30T09:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:35 crc kubenswrapper[4970]: E0930 09:47:35.442910 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:35Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:35 crc kubenswrapper[4970]: E0930 09:47:35.443170 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.445472 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.445577 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.445595 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.445622 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.445646 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:35Z","lastTransitionTime":"2025-09-30T09:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.456182 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:35 crc kubenswrapper[4970]: E0930 09:47:35.456357 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:47:35 crc kubenswrapper[4970]: E0930 09:47:35.456466 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs podName:8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c nodeName:}" failed. No retries permitted until 2025-09-30 09:48:07.456440308 +0000 UTC m=+100.528291282 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs") pod "network-metrics-daemon-sgksk" (UID: "8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.549455 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.549505 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.549517 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.549540 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.549553 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:35Z","lastTransitionTime":"2025-09-30T09:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.653563 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.653617 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.653629 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.653650 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.653662 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:35Z","lastTransitionTime":"2025-09-30T09:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.667504 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:35 crc kubenswrapper[4970]: E0930 09:47:35.667707 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.756855 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.757053 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.757067 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.757145 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.757222 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:35Z","lastTransitionTime":"2025-09-30T09:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.878296 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.878352 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.878375 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.878403 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.878422 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:35Z","lastTransitionTime":"2025-09-30T09:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.981463 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.981517 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.981532 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.981554 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:35 crc kubenswrapper[4970]: I0930 09:47:35.981570 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:35Z","lastTransitionTime":"2025-09-30T09:47:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.087240 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.087285 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.087302 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.087334 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.087353 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:36Z","lastTransitionTime":"2025-09-30T09:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.189474 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.189580 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.189598 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.189621 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.189638 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:36Z","lastTransitionTime":"2025-09-30T09:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.291853 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.291893 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.291901 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.291917 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.291947 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:36Z","lastTransitionTime":"2025-09-30T09:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.395495 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.395535 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.395545 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.395559 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.395570 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:36Z","lastTransitionTime":"2025-09-30T09:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.498300 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.498381 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.498401 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.498428 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.498446 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:36Z","lastTransitionTime":"2025-09-30T09:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.601258 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.601308 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.601320 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.601341 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.601353 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:36Z","lastTransitionTime":"2025-09-30T09:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.668004 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.668041 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.668107 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:36 crc kubenswrapper[4970]: E0930 09:47:36.668224 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:36 crc kubenswrapper[4970]: E0930 09:47:36.668347 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:36 crc kubenswrapper[4970]: E0930 09:47:36.668382 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.703111 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.703163 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.703176 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.703198 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.703211 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:36Z","lastTransitionTime":"2025-09-30T09:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.805855 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.805880 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.805887 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.805899 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.805908 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:36Z","lastTransitionTime":"2025-09-30T09:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.909334 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.909437 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.909455 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.909486 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:36 crc kubenswrapper[4970]: I0930 09:47:36.909506 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:36Z","lastTransitionTime":"2025-09-30T09:47:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.012224 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.012274 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.012288 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.012328 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.012344 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:37Z","lastTransitionTime":"2025-09-30T09:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.116122 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.116181 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.116191 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.116212 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.116223 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:37Z","lastTransitionTime":"2025-09-30T09:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.218784 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.218841 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.218851 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.218871 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.218885 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:37Z","lastTransitionTime":"2025-09-30T09:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.322404 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.322449 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.322458 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.322474 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.322486 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:37Z","lastTransitionTime":"2025-09-30T09:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.426054 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.426114 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.426133 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.426156 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.426173 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:37Z","lastTransitionTime":"2025-09-30T09:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.529946 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.530004 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.530015 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.530039 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.530051 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:37Z","lastTransitionTime":"2025-09-30T09:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.633724 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.633781 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.633799 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.633824 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.633843 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:37Z","lastTransitionTime":"2025-09-30T09:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.667754 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:37 crc kubenswrapper[4970]: E0930 09:47:37.667975 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.687555 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.708524 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688
627bd19c5cf99bd60808bf68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:15Z\\\",\\\"message\\\":\\\"ocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0930 09:47:15.819884 6628 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 09:47:15.819896 6628 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0930 09:47:15.819906 6628 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.722564 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa646711-828f-41f6-bcaf-ffbe1917a50d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcb713a1f422bed1215b09d05f3bc528355393af9e0aa765d93d50ecd4d044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d0608
6a4cba73c63ce51bc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.735672 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.735703 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.735711 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.735725 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.735734 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:37Z","lastTransitionTime":"2025-09-30T09:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.743580 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.761977 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c381c9-b49e-4a91-a2f9-623f77db9c65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1889d9f91438a275e1ff6fc0adc74a42e47bcab961f06b96fd06acdf63819fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://266e8cda32e3cc8777714f25448405d0487add6c95f695e43ee9f7674ea1287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72de87daf9a24b2d7b7f2a45bce5ca85e4476ed82f44a39639554aad8e4b328c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.778976 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.792299 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.810015 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd9
74dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.824692 4970 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.838187 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.838225 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.838234 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.838250 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.838261 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:37Z","lastTransitionTime":"2025-09-30T09:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.840953 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.856773 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.869083 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.882231 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.894101 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.913872 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.929803 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.940080 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.940111 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.940119 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.940132 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.940141 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:37Z","lastTransitionTime":"2025-09-30T09:47:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.943981 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:37 crc kubenswrapper[4970]: I0930 09:47:37.956663 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:37Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.042060 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.042099 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.042110 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.042126 4970 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.042136 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:38Z","lastTransitionTime":"2025-09-30T09:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.144870 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.144936 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.144948 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.144968 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.144981 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:38Z","lastTransitionTime":"2025-09-30T09:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.247739 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.247812 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.247822 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.247844 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.247857 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:38Z","lastTransitionTime":"2025-09-30T09:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.271740 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wdlzl_adc4e528-ad76-4673-925a-f4f932e1ac51/kube-multus/0.log" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.271786 4970 generic.go:334] "Generic (PLEG): container finished" podID="adc4e528-ad76-4673-925a-f4f932e1ac51" containerID="6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430" exitCode=1 Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.271814 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wdlzl" event={"ID":"adc4e528-ad76-4673-925a-f4f932e1ac51","Type":"ContainerDied","Data":"6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430"} Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.272187 4970 scope.go:117] "RemoveContainer" containerID="6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.289697 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.305549 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.319169 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.335649 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.348611 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c381c9-b49e-4a91-a2f9-623f77db9c65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1889d9f91438a275e1ff6fc0adc74a42e47bcab961f06b96fd06acdf63819fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://266e8cda32e3cc8777714f25448405d0487add6c95f695e43ee9f7674ea1287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72de87daf9a24b2d7b7f2a45bce5ca85e4476ed82f44a39639554aad8e4b328c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.351933 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.351959 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.351968 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.351982 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.352010 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:38Z","lastTransitionTime":"2025-09-30T09:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.360800 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.370526 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.383125 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.399272 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:38Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:37Z\\\",\\\"message\\\":\\\"2025-09-30T09:46:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6\\\\n2025-09-30T09:46:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6 to /host/opt/cni/bin/\\\\n2025-09-30T09:46:52Z [verbose] multus-daemon started\\\\n2025-09-30T09:46:52Z [verbose] Readiness Indicator file check\\\\n2025-09-30T09:47:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.419483 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:15Z\\\",\\\"message\\\":\\\"ocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0930 09:47:15.819884 6628 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 09:47:15.819896 6628 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0930 09:47:15.819906 6628 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.430673 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa646711-828f-41f6-bcaf-ffbe1917a50d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcb713a1f422bed1215b09d05f3bc528355393af9e0aa765d93d50ecd4d044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d0608
6a4cba73c63ce51bc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.445881 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.454021 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.454047 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.454057 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.454072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.454084 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:38Z","lastTransitionTime":"2025-09-30T09:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.457857 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.467560 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 
2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.481692 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.495899 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.510452 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.525268 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:38Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.556147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.556257 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.556276 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.556300 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.556317 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:38Z","lastTransitionTime":"2025-09-30T09:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.658637 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.658725 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.658743 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.658776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.658821 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:38Z","lastTransitionTime":"2025-09-30T09:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.668032 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.668187 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:38 crc kubenswrapper[4970]: E0930 09:47:38.668371 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.668402 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:38 crc kubenswrapper[4970]: E0930 09:47:38.668564 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:38 crc kubenswrapper[4970]: E0930 09:47:38.668729 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.761816 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.761846 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.761854 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.761867 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.761876 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:38Z","lastTransitionTime":"2025-09-30T09:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.863765 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.863813 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.863823 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.863838 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.863849 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:38Z","lastTransitionTime":"2025-09-30T09:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.966832 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.966871 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.966882 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.966901 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:38 crc kubenswrapper[4970]: I0930 09:47:38.966922 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:38Z","lastTransitionTime":"2025-09-30T09:47:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.069918 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.069983 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.070037 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.070062 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.070081 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:39Z","lastTransitionTime":"2025-09-30T09:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.173238 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.173284 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.173293 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.173307 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.173316 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:39Z","lastTransitionTime":"2025-09-30T09:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.276496 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.276618 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.276646 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.276676 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.276699 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:39Z","lastTransitionTime":"2025-09-30T09:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.278370 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wdlzl_adc4e528-ad76-4673-925a-f4f932e1ac51/kube-multus/0.log" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.278460 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wdlzl" event={"ID":"adc4e528-ad76-4673-925a-f4f932e1ac51","Type":"ContainerStarted","Data":"74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824"} Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.298656 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.318031 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.338198 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.358676 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.377498 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.379684 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.379723 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.379731 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.379746 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.379757 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:39Z","lastTransitionTime":"2025-09-30T09:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.395151 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.411354 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.427066 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.443689 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:37Z\\\",\\\"message\\\":\\\"2025-09-30T09:46:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6\\\\n2025-09-30T09:46:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6 to /host/opt/cni/bin/\\\\n2025-09-30T09:46:52Z [verbose] multus-daemon started\\\\n2025-09-30T09:46:52Z [verbose] Readiness Indicator file check\\\\n2025-09-30T09:47:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.463410 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:15Z\\\",\\\"message\\\":\\\"ocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0930 09:47:15.819884 6628 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 09:47:15.819896 6628 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0930 09:47:15.819906 6628 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.474430 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa646711-828f-41f6-bcaf-ffbe1917a50d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcb713a1f422bed1215b09d05f3bc528355393af9e0aa765d93d50ecd4d044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d0608
6a4cba73c63ce51bc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.481915 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.481940 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.481947 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.481961 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.481969 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:39Z","lastTransitionTime":"2025-09-30T09:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.491866 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.507261 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c381c9-b49e-4a91-a2f9-623f77db9c65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1889d9f91438a275e1ff6fc0adc74a42e47bcab961f06b96fd06acdf63819fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://266e8cda32e3cc8777714f25448405d0487add6c95f695e43ee9f7674ea1287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72de87daf9a24b2d7b7f2a45bce5ca85e4476ed82f44a39639554aad8e4b328c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.522652 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.534683 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.550575 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd9
74dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.564932 4970 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.577896 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:39Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.584426 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:39 crc 
kubenswrapper[4970]: I0930 09:47:39.584492 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.584514 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.584544 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.584565 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:39Z","lastTransitionTime":"2025-09-30T09:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.667478 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:39 crc kubenswrapper[4970]: E0930 09:47:39.667657 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.686472 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.686741 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.686894 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.687111 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.687321 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:39Z","lastTransitionTime":"2025-09-30T09:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.790098 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.790151 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.790167 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.790189 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.790207 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:39Z","lastTransitionTime":"2025-09-30T09:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.893353 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.893377 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.893386 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.893406 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.893415 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:39Z","lastTransitionTime":"2025-09-30T09:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.995910 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.995949 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.995961 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.995976 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:39 crc kubenswrapper[4970]: I0930 09:47:39.996002 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:39Z","lastTransitionTime":"2025-09-30T09:47:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.099817 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.099884 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.099901 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.099926 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.099944 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:40Z","lastTransitionTime":"2025-09-30T09:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.202302 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.202356 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.202367 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.202384 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.202396 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:40Z","lastTransitionTime":"2025-09-30T09:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.305433 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.305491 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.305508 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.305534 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.305550 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:40Z","lastTransitionTime":"2025-09-30T09:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.407777 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.408191 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.408419 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.408643 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.408855 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:40Z","lastTransitionTime":"2025-09-30T09:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.511241 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.511325 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.511335 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.511358 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.511372 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:40Z","lastTransitionTime":"2025-09-30T09:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.614644 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.614698 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.614710 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.614732 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.614747 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:40Z","lastTransitionTime":"2025-09-30T09:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.668532 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.668716 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:40 crc kubenswrapper[4970]: E0930 09:47:40.668901 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.668932 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:40 crc kubenswrapper[4970]: E0930 09:47:40.669401 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:40 crc kubenswrapper[4970]: E0930 09:47:40.669525 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.669950 4970 scope.go:117] "RemoveContainer" containerID="c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.716812 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.716858 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.716871 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.716892 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.716903 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:40Z","lastTransitionTime":"2025-09-30T09:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.819659 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.819718 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.819734 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.819760 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.819778 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:40Z","lastTransitionTime":"2025-09-30T09:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.921766 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.921796 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.921804 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.921819 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:40 crc kubenswrapper[4970]: I0930 09:47:40.921829 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:40Z","lastTransitionTime":"2025-09-30T09:47:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.023666 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.023700 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.023711 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.023725 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.023736 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:41Z","lastTransitionTime":"2025-09-30T09:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.126921 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.126972 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.127000 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.127021 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.127035 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:41Z","lastTransitionTime":"2025-09-30T09:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.229680 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.229717 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.229726 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.229739 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.229750 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:41Z","lastTransitionTime":"2025-09-30T09:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.289605 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/2.log" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.293547 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"} Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.294060 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.313834 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.330216 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.332139 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.332191 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.332206 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.332225 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.332239 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:41Z","lastTransitionTime":"2025-09-30T09:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.348470 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.368566 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.385492 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.397379 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.409170 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.421347 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.432555 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa646711-828f-41f6-bcaf-ffbe1917a50d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcb713a1f422bed1215b09d05f3bc528355393af9e0aa765d93d50ecd4d044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.435031 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.435065 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.435076 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.435090 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.435100 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:41Z","lastTransitionTime":"2025-09-30T09:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.448163 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.461075 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c381c9-b49e-4a91-a2f9-623f77db9c65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1889d9f91438a275e1ff6fc0adc74a42e47bcab961f06b96fd06acdf63819fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://266e8cda32e3cc8777714f25448405d0487add6c95f695e43ee9f7674ea1287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72de87daf9a24b2d7b7f2a45bce5ca85e4476ed82f44a39639554aad8e4b328c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.474236 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.488053 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.502650 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd9
74dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.516363 4970 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:37Z\\\",\\\"message\\\":\\\"2025-09-30T09:46:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6\\\\n2025-09-30T09:46:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6 to /host/opt/cni/bin/\\\\n2025-09-30T09:46:52Z [verbose] multus-daemon started\\\\n2025-09-30T09:46:52Z [verbose] Readiness Indicator file check\\\\n2025-09-30T09:47:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.537224 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.537265 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.537276 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.537294 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.537305 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:41Z","lastTransitionTime":"2025-09-30T09:47:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.540683 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:15Z\\\",\\\"message\\\":\\\"ocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0930 09:47:15.819884 6628 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 09:47:15.819896 6628 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0930 09:47:15.819906 6628 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for 
networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.557272 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 
09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.573519 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:41Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.639336 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.639381 4970 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.639393 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.639409 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.639421 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:41Z","lastTransitionTime":"2025-09-30T09:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.668131 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:41 crc kubenswrapper[4970]: E0930 09:47:41.668257 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.745272 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.745345 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.745367 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.745396 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.745418 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:41Z","lastTransitionTime":"2025-09-30T09:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.848653 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.848691 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.848702 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.848718 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.848732 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:41Z","lastTransitionTime":"2025-09-30T09:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.951582 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.951621 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.951630 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.951643 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:41 crc kubenswrapper[4970]: I0930 09:47:41.951652 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:41Z","lastTransitionTime":"2025-09-30T09:47:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.053733 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.053762 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.053769 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.053781 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.053789 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:42Z","lastTransitionTime":"2025-09-30T09:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.156590 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.156672 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.156699 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.156730 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.156756 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:42Z","lastTransitionTime":"2025-09-30T09:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.258592 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.258661 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.258677 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.258699 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.258715 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:42Z","lastTransitionTime":"2025-09-30T09:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.298509 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/3.log" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.299347 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/2.log" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.302865 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" containerID="595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99" exitCode=1 Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.302908 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"} Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.302945 4970 scope.go:117] "RemoveContainer" containerID="c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.304317 4970 scope.go:117] "RemoveContainer" containerID="595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99" Sep 30 09:47:42 crc kubenswrapper[4970]: E0930 09:47:42.304721 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\"" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.327151 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.342071 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.351833 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.362412 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.362513 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.362530 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.362553 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.362570 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:42Z","lastTransitionTime":"2025-09-30T09:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.363918 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.378475 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46
:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.393484 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:37Z\\\",\\\"message\\\":\\\"2025-09-30T09:46:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6\\\\n2025-09-30T09:46:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6 to /host/opt/cni/bin/\\\\n2025-09-30T09:46:52Z [verbose] multus-daemon started\\\\n2025-09-30T09:46:52Z [verbose] Readiness Indicator file check\\\\n2025-09-30T09:47:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.410599 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c69d166bd00835352d90ae38fc8a9d71861f0688627bd19c5cf99bd60808bf68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:15Z\\\",\\\"message\\\":\\\"ocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF0930 09:47:15.819884 6628 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:15Z is after 2025-08-24T17:21:41Z]\\\\nI0930 09:47:15.819896 6628 lb_config.go:1031] Cluster endpoints for openshift-marketplace/redhat-marketplace for network=default are: map[]\\\\nI0930 09:47:15.819906 6628 services_controller.go:443] Built service openshift-marketplace/redhat-marketplace LB cluster-wide configs for networ\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:41Z\\\",\\\"message\\\":\\\"generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0930 09:47:41.545127 6982 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI0930 09:47:41.545133 6982 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.545865ms\\\\nI0930 09:47:41.545140 6982 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nF0930 
09:47:41.545133 6982 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.423186 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa646711-828f-41f6-bcaf-ffbe1917a50d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcb713a1f422bed1215b09d05f3bc528355393af9e0aa765d93d50ecd4d044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.437095 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.450713 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c381c9-b49e-4a91-a2f9-623f77db9c65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1889d9f91438a275e1ff6fc0adc74a42e47bcab961f06b96fd06acdf63819fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://266e8cda32e3cc8777714f25448405d0487add6c95f695e43ee9f7674ea1287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72de87daf9a24b2d7b7f2a45bce5ca85e4476ed82f44a39639554aad8e4b328c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.465191 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.465250 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.465266 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.465289 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.465583 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:42Z","lastTransitionTime":"2025-09-30T09:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.466431 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.477810 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 
09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.487708 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.501597 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.517946 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.536343 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.549869 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.566950 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:42Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.568546 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.568590 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.568606 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.568626 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.568643 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:42Z","lastTransitionTime":"2025-09-30T09:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.667650 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.667665 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:42 crc kubenswrapper[4970]: E0930 09:47:42.667840 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:42 crc kubenswrapper[4970]: E0930 09:47:42.667927 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.667691 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:42 crc kubenswrapper[4970]: E0930 09:47:42.668196 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.671461 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.671491 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.671502 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.671515 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.671527 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:42Z","lastTransitionTime":"2025-09-30T09:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.775123 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.775169 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.775177 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.775194 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.775203 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:42Z","lastTransitionTime":"2025-09-30T09:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.878226 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.878304 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.878320 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.878337 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.878350 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:42Z","lastTransitionTime":"2025-09-30T09:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.981879 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.981942 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.981953 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.981978 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:42 crc kubenswrapper[4970]: I0930 09:47:42.982010 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:42Z","lastTransitionTime":"2025-09-30T09:47:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.085520 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.085667 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.085697 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.085731 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.085755 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:43Z","lastTransitionTime":"2025-09-30T09:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.189133 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.189198 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.189215 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.189243 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.189329 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:43Z","lastTransitionTime":"2025-09-30T09:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.292148 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.292214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.292232 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.292260 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.292277 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:43Z","lastTransitionTime":"2025-09-30T09:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.308655 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/3.log" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.312723 4970 scope.go:117] "RemoveContainer" containerID="595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99" Sep 30 09:47:43 crc kubenswrapper[4970]: E0930 09:47:43.312934 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\"" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.337507 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.353192 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae3
4a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.366234 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod 
\"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.382793 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:37Z\\\",\\\"message\\\":\\\"2025-09-30T09:46:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6\\\\n2025-09-30T09:46:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6 to /host/opt/cni/bin/\\\\n2025-09-30T09:46:52Z [verbose] multus-daemon started\\\\n2025-09-30T09:46:52Z [verbose] Readiness Indicator file check\\\\n2025-09-30T09:47:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.394664 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.394753 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.394801 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.394826 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.394845 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:43Z","lastTransitionTime":"2025-09-30T09:47:43Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.405832 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:41Z\\\",\\\"message\\\":\\\"generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0930 09:47:41.545127 6982 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI0930 09:47:41.545133 6982 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.545865ms\\\\nI0930 09:47:41.545140 6982 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nF0930 09:47:41.545133 6982 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.421262 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa646711-828f-41f6-bcaf-ffbe1917a50d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcb713a1f422bed1215b09d05f3bc528355393af9e0aa765d93d50ecd4d044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.441313 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.459680 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c381c9-b49e-4a91-a2f9-623f77db9c65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1889d9f91438a275e1ff6fc0adc74a42e47bcab961f06b96fd06acdf63819fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://266e8cda32e3cc8777714f25448405d0487add6c95f695e43ee9f7674ea1287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72de87daf9a24b2d7b7f2a45bce5ca85e4476ed82f44a39639554aad8e4b328c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.481346 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.494628 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.497354 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.497416 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.497439 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.497472 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.497496 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:43Z","lastTransitionTime":"2025-09-30T09:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.517752 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.533246 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.545579 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.561423 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.577044 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.592071 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.600869 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.600916 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.600929 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.600949 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.600962 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:43Z","lastTransitionTime":"2025-09-30T09:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.610955 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.627930 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:43Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.668054 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:43 crc kubenswrapper[4970]: E0930 09:47:43.668249 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.703972 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.704114 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.704136 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.704163 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.704186 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:43Z","lastTransitionTime":"2025-09-30T09:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.807722 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.807762 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.807772 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.807788 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.807799 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:43Z","lastTransitionTime":"2025-09-30T09:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.911044 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.911114 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.911140 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.911166 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:43 crc kubenswrapper[4970]: I0930 09:47:43.911183 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:43Z","lastTransitionTime":"2025-09-30T09:47:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.014940 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.015067 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.015085 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.015109 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.015127 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:44Z","lastTransitionTime":"2025-09-30T09:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.118230 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.118295 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.118312 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.118336 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.118356 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:44Z","lastTransitionTime":"2025-09-30T09:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.221838 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.221916 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.221940 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.221972 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.222033 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:44Z","lastTransitionTime":"2025-09-30T09:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.329028 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.329094 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.329107 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.329128 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.329144 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:44Z","lastTransitionTime":"2025-09-30T09:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.432598 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.432679 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.432698 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.432724 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.432741 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:44Z","lastTransitionTime":"2025-09-30T09:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.536580 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.536655 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.536674 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.536703 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.536725 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:44Z","lastTransitionTime":"2025-09-30T09:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.639979 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.640141 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.640202 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.640231 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.640249 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:44Z","lastTransitionTime":"2025-09-30T09:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.667894 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.668072 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:44 crc kubenswrapper[4970]: E0930 09:47:44.668158 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.668083 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:44 crc kubenswrapper[4970]: E0930 09:47:44.668271 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:44 crc kubenswrapper[4970]: E0930 09:47:44.668473 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.744209 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.744272 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.744289 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.744317 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.744337 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:44Z","lastTransitionTime":"2025-09-30T09:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.847410 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.847479 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.847497 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.847526 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.847543 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:44Z","lastTransitionTime":"2025-09-30T09:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.950526 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.950570 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.950582 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.950598 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:44 crc kubenswrapper[4970]: I0930 09:47:44.950611 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:44Z","lastTransitionTime":"2025-09-30T09:47:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.053196 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.053228 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.053272 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.053288 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.053297 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.156130 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.156168 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.156185 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.156205 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.156220 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.258740 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.258794 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.258811 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.258834 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.258850 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.361467 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.361535 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.361555 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.361582 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.361602 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.464826 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.464885 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.464910 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.464939 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.464958 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.538047 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.538099 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.538116 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.538137 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.538153 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: E0930 09:47:45.556529 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:45Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.561347 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.561466 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.561491 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.561522 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.561546 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: E0930 09:47:45.582894 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:45Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.588011 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.588065 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.588085 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.588114 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.588135 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: E0930 09:47:45.607711 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:45Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.612648 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.612712 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.612732 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.612760 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.612781 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: E0930 09:47:45.627767 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:45Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.632650 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.632703 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
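Every "Error updating node status, will retry" entry above fails for the same reason: the API server cannot complete the kubelet's status patch because the node.network-node-identity.openshift.io admission webhook, served at https://127.0.0.1:9743, presents an x509 certificate that expired on 2025-08-24T17:21:41Z, more than a month before the node's current clock (2025-09-30T09:47:45Z). A minimal Go sketch, runnable on the node, that confirms the served certificate's validity window (the address comes from the log; InsecureSkipVerify is deliberate so the expired chain can still be inspected rather than trusted):

// probe_webhook_cert.go - inspect the leaf certificate served by the
// webhook endpoint named in the log above and print its validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspection only; never use this to trust traffic
	})
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.UTC().Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.UTC().Format(time.RFC3339))
	fmt.Printf("expired:   %v\n", now.After(cert.NotAfter))
}

If notAfter is indeed in the past, no certificate-verifying client can reach the webhook over TLS, so every node-status patch will keep failing until the cluster's certificates are rotated.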
event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.632908 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.632944 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.632968 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: E0930 09:47:45.653270 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:45Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:45 crc kubenswrapper[4970]: E0930 09:47:45.653484 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.655738 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
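The kubelet bounds this loop: after a fixed number of consecutive patch failures in one sync round it logs "Unable to update node status ... exceeds retry count" and starts over on the next interval, which is why the identical webhook error recurs every few hundred milliseconds in this log. The err= string embeds, in escaped form, the exact strategic-merge patch being rejected. The sketch below decodes a hand-trimmed excerpt of that payload (conditions only; the images list is dropped and the Ready message shortened, so the constant is illustrative rather than verbatim) to show the four node conditions being reported:

// decode_status_patch.go - decode a trimmed excerpt of the escaped JSON
// patch embedded in the err= string above and print each condition.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type condition struct {
	Type    string `json:"type"`
	Status  string `json:"status"`
	Reason  string `json:"reason"`
	Message string `json:"message"`
}

type patch struct {
	Status struct {
		Conditions []condition `json:"conditions"`
	} `json:"status"`
}

// Excerpt copied (unescaped and shortened) from the failed patch in the log.
const raw = `{"status":{"conditions":[
  {"type":"MemoryPressure","status":"False","reason":"KubeletHasSufficientMemory","message":"kubelet has sufficient memory available"},
  {"type":"DiskPressure","status":"False","reason":"KubeletHasNoDiskPressure","message":"kubelet has no disk pressure"},
  {"type":"PIDPressure","status":"False","reason":"KubeletHasSufficientPID","message":"kubelet has sufficient PID available"},
  {"type":"Ready","status":"False","reason":"KubeletNotReady","message":"container runtime network not ready"}]}}`

func main() {
	var p patch
	if err := json.Unmarshal([]byte(raw), &p); err != nil {
		log.Fatalf("unmarshal: %v", err)
	}
	for _, c := range p.Status.Conditions {
		fmt.Printf("%-14s status=%-5s reason=%s\n", c.Type, c.Status, c.Reason)
	}
}

Note what the decoded conditions say: MemoryPressure, DiskPressure and PIDPressure are all False (healthy); only Ready is False, with reason KubeletNotReady. The node is not resource-starved, it is blocked purely on networking.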
event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.655795 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.655812 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.655837 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.655854 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.668292 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:45 crc kubenswrapper[4970]: E0930 09:47:45.668459 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.759190 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.759261 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.759274 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.759297 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.759311 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.862913 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.862983 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.863070 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.863097 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.863114 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.966360 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.966473 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.966495 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.966536 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:45 crc kubenswrapper[4970]: I0930 09:47:45.966566 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:45Z","lastTransitionTime":"2025-09-30T09:47:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.070151 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.070198 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.070215 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.070238 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.070255 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:46Z","lastTransitionTime":"2025-09-30T09:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.173263 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.173324 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.173340 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.173371 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.173394 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:46Z","lastTransitionTime":"2025-09-30T09:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.276099 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.276179 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.276198 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.276223 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.276241 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:46Z","lastTransitionTime":"2025-09-30T09:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.379026 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.379096 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.379119 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.379147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.379168 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:46Z","lastTransitionTime":"2025-09-30T09:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.482760 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.482836 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.482857 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.482890 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.482925 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:46Z","lastTransitionTime":"2025-09-30T09:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.585908 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.586026 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.586060 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.586091 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.586112 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:46Z","lastTransitionTime":"2025-09-30T09:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.667504 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.667547 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.667661 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:46 crc kubenswrapper[4970]: E0930 09:47:46.667847 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:46 crc kubenswrapper[4970]: E0930 09:47:46.668066 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:46 crc kubenswrapper[4970]: E0930 09:47:46.668170 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.689370 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.689454 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.689476 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.689504 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.689526 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:46Z","lastTransitionTime":"2025-09-30T09:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.792458 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.792521 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.792537 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.792562 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.792579 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:46Z","lastTransitionTime":"2025-09-30T09:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.895510 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.895600 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.895617 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.895640 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.895660 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:46Z","lastTransitionTime":"2025-09-30T09:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.999042 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.999115 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.999134 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.999158 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:46 crc kubenswrapper[4970]: I0930 09:47:46.999178 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:46Z","lastTransitionTime":"2025-09-30T09:47:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.102749 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.102806 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.102830 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.102858 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.102879 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:47Z","lastTransitionTime":"2025-09-30T09:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.205165 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.205236 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.205255 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.205280 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.205297 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:47Z","lastTransitionTime":"2025-09-30T09:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.307952 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.308025 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.308037 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.308055 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.308068 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:47Z","lastTransitionTime":"2025-09-30T09:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.412173 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.412238 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.412256 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.412280 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.412296 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:47Z","lastTransitionTime":"2025-09-30T09:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.515179 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.515289 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.515308 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.515338 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.515373 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:47Z","lastTransitionTime":"2025-09-30T09:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.617795 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.617822 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.617831 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.617847 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.617856 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:47Z","lastTransitionTime":"2025-09-30T09:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.668532 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:47 crc kubenswrapper[4970]: E0930 09:47:47.668745 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.687262 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.704497 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c381c9-b49e-4a91-a2f9-623f77db9c65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1889d9f91438a275e1ff6fc0adc74a42e47bcab961f06b96fd06acdf63819fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://266e8cda32e3cc8777714f25448405d0487add6c95f695e43ee9f7674ea1287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72de87daf9a24b2d7b7f2a45bce5ca85e4476ed82f44a39639554aad8e4b328c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.719711 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.719763 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.719782 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.719803 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.719819 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:47Z","lastTransitionTime":"2025-09-30T09:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.723223 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.736355 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.752258 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.764411 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:37Z\\\",\\\"message\\\":\\\"2025-09-30T09:46:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6\\\\n2025-09-30T09:46:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6 to /host/opt/cni/bin/\\\\n2025-09-30T09:46:52Z [verbose] multus-daemon started\\\\n2025-09-30T09:46:52Z [verbose] Readiness Indicator file check\\\\n2025-09-30T09:47:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.791261 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:41Z\\\",\\\"message\\\":\\\"generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0930 09:47:41.545127 6982 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI0930 09:47:41.545133 6982 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.545865ms\\\\nI0930 09:47:41.545140 6982 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nF0930 09:47:41.545133 6982 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.809788 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa646711-828f-41f6-bcaf-ffbe1917a50d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcb713a1f422bed1215b09d05f3bc528355393af9e0aa765d93d50ecd4d044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d0608
6a4cba73c63ce51bc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.822173 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.822226 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.822235 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.822249 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.822258 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:47Z","lastTransitionTime":"2025-09-30T09:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.824555 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.839167 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 
09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.852196 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.865134 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.879087 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.896209 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.913760 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.924803 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.924842 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.924854 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.924873 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.924885 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:47Z","lastTransitionTime":"2025-09-30T09:47:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.932310 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.948183 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:47 crc kubenswrapper[4970]: I0930 09:47:47.965508 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:47Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.028595 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.028643 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.028656 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.028675 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.028687 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:48Z","lastTransitionTime":"2025-09-30T09:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.131687 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.132038 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.132057 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.132078 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.132093 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:48Z","lastTransitionTime":"2025-09-30T09:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.235314 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.235390 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.235414 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.235445 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.235466 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:48Z","lastTransitionTime":"2025-09-30T09:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.339312 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.339372 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.339386 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.339406 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.339434 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:48Z","lastTransitionTime":"2025-09-30T09:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.442483 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.442538 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.442555 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.442579 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.442597 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:48Z","lastTransitionTime":"2025-09-30T09:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.545633 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.545682 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.545699 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.545723 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.545741 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:48Z","lastTransitionTime":"2025-09-30T09:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.648450 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.648510 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.648527 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.648550 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.648567 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:48Z","lastTransitionTime":"2025-09-30T09:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.668268 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.668338 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.668352 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:48 crc kubenswrapper[4970]: E0930 09:47:48.668467 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:48 crc kubenswrapper[4970]: E0930 09:47:48.668630 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:48 crc kubenswrapper[4970]: E0930 09:47:48.668750 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.751251 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.751311 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.751328 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.751351 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.751368 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:48Z","lastTransitionTime":"2025-09-30T09:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
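Has your network provider started?"}

The NodeNotReady heartbeats and the "Error syncing pod, skipping" entries above share a second symptom: the container runtime reports NetworkReady=false because nothing has yet written a network configuration into /etc/kubernetes/cni/net.d/; on this cluster that file comes from ovn-kubernetes, whose own pods appear earlier in the log failing to report status through the expired webhook. A sketch of the directory check behind the message, assuming (as in libcni) that *.conf, *.conflist and *.json files count as configuration:

    # cni_config_check.py - sketch of the "no CNI configuration file" condition.
    # Run on the node; the path is taken from the log message.
    from pathlib import Path

    CONF_DIR = Path("/etc/kubernetes/cni/net.d")
    EXTENSIONS = {".conf", ".conflist", ".json"}  # extensions libcni accepts

    def cni_configs(conf_dir: Path) -> list[Path]:
        if not conf_dir.is_dir():
            return []
        return sorted(p for p in conf_dir.iterdir()
                      if p.is_file() and p.suffix in EXTENSIONS)

    configs = cni_configs(CONF_DIR)
    if configs:
        for p in configs:
            print("found:", p)
    else:
        # The state this kubelet is reporting: the network plugin has not
        # written its config, so the runtime stays NetworkReady=false.
        print(f"no CNI configuration file in {CONF_DIR}/")

Until a file appears there, every pod that needs a sandbox network (the network-metrics-daemon, network-check-* and networking-console-plugin pods above) will keep failing to sync.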
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.854837 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.854920 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.854943 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.854974 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.855043 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:48Z","lastTransitionTime":"2025-09-30T09:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.957161 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.957193 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.957201 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.957213 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:48 crc kubenswrapper[4970]: I0930 09:47:48.957222 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:48Z","lastTransitionTime":"2025-09-30T09:47:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.060083 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.060133 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.060144 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.060161 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.060172 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:49Z","lastTransitionTime":"2025-09-30T09:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.163293 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.163359 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.163379 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.163402 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.163418 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:49Z","lastTransitionTime":"2025-09-30T09:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.266007 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.266047 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.266055 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.266069 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.266077 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:49Z","lastTransitionTime":"2025-09-30T09:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.368917 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.368978 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.369037 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.369067 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.369089 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:49Z","lastTransitionTime":"2025-09-30T09:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.472581 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.472663 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.472686 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.472719 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.472741 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:49Z","lastTransitionTime":"2025-09-30T09:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.576160 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.576225 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.576241 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.576265 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.576283 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:49Z","lastTransitionTime":"2025-09-30T09:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.668465 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:49 crc kubenswrapper[4970]: E0930 09:47:49.668590 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
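pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"

From here on the journal repeats the same few messages several times per second. A hypothetical triage helper (the script name and regexes are illustrative, not from any cluster tooling) that condenses a dump like this into counts per pod and per node:

    # journal_triage.py - illustrative helper for summarizing a kubelet journal.
    # Assumed usage (unit name may differ): journalctl -u kubelet --no-pager | python3 journal_triage.py
    import re
    import sys
    from collections import Counter

    STATUS_FAIL = re.compile(r'"Failed to update status for pod" pod="([^"]+)"')
    NOT_READY = re.compile(r'"Node became not ready" node="([^"]+)"')

    fails: Counter = Counter()
    heartbeats: Counter = Counter()
    for line in sys.stdin:
        for m in STATUS_FAIL.finditer(line):
            fails[m.group(1)] += 1
        for m in NOT_READY.finditer(line):
            heartbeats[m.group(1)] += 1

    for pod, n in fails.most_common():
        print(f"{n:4d} status patch failures  {pod}")
    for node, n in heartbeats.most_common():
        print(f"{n:4d} NotReady heartbeats    node={node}")

Run over this section, it would reduce the wall of text to a handful of counters, all pointing at the two conditions above: the expired webhook certificate and the missing CNI configuration.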
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.678226 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.678303 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.678326 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.678353 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.678378 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:49Z","lastTransitionTime":"2025-09-30T09:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.781196 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.781231 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.781239 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.781270 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.781280 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:49Z","lastTransitionTime":"2025-09-30T09:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.884729 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.884784 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.884802 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.884826 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.884842 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:49Z","lastTransitionTime":"2025-09-30T09:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.988497 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.988572 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.988587 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.988605 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:49 crc kubenswrapper[4970]: I0930 09:47:49.988617 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:49Z","lastTransitionTime":"2025-09-30T09:47:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.091503 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.091575 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.091593 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.091621 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.091642 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:50Z","lastTransitionTime":"2025-09-30T09:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.195438 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.195501 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.195515 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.195536 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.195553 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:50Z","lastTransitionTime":"2025-09-30T09:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.298541 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.298603 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.298622 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.298648 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.298666 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:50Z","lastTransitionTime":"2025-09-30T09:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.401942 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.402034 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.402052 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.402075 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.402094 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:50Z","lastTransitionTime":"2025-09-30T09:47:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
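Everything in this stretch of the journal reduces to one condition: the kubelet keeps node "crc" NotReady because /etc/kubernetes/cni/net.d/ contains no CNI configuration, and it re-records the same five status events roughly every 100 ms while sandbox creation stays blocked. As a first on-node check, a minimal Python sketch like the following confirms whether a network config has appeared yet. The directory path comes from the log message itself; the .conf/.conflist/.json extensions are an assumption based on what the standard CNI config loader (libcni) accepts.

    #!/usr/bin/env python3
    # Minimal sketch: does the CNI config dir the kubelet is complaining
    # about contain any config yet? Run on the node itself.
    import os
    import sys

    CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"     # path from the kubelet message above
    CNI_EXTS = (".conf", ".conflist", ".json")     # assumed: standard libcni extensions

    def cni_configs(path=CNI_CONF_DIR):
        try:
            return sorted(f for f in os.listdir(path) if f.endswith(CNI_EXTS))
        except FileNotFoundError:
            return []  # directory not created yet counts as "no config"

    if __name__ == "__main__":
        found = cni_configs()
        print(found if found else f"no CNI configuration file in {CNI_CONF_DIR}")
        sys.exit(0 if found else 1)

On OpenShift this directory is typically populated (e.g. with a multus config) once the cluster network operator's pods are running again, at which point the NodeNotReady churn below stops on its own.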
Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.668092 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.668092 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 09:47:50 crc kubenswrapper[4970]: E0930 09:47:50.668320 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 09:47:50 crc kubenswrapper[4970]: E0930 09:47:50.668449 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 09:47:50 crc kubenswrapper[4970]: I0930 09:47:50.668124 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 09:47:50 crc kubenswrapper[4970]: E0930 09:47:50.668574 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... the node-status block repeats at ~100 ms intervals from 09:47:50.711 through 09:47:51.643, content unchanged ...]
Sep 30 09:47:51 crc kubenswrapper[4970]: I0930 09:47:51.667764 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:47:51 crc kubenswrapper[4970]: E0930 09:47:51.667891 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"
[... the node-status block repeats at ~100 ms intervals from 09:47:51.746 through 09:47:52.609, content unchanged ...]
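Note the two cadences in play: the node-status block churns every ~100 ms, while the "No sandbox for pod can be found" entry for network-metrics-daemon-sgksk recurs at 09:47:49.668, 09:47:51.667 and 09:47:53.668, i.e. the pod worker retries roughly every two seconds. A small sketch (assuming the same klog timestamp format, I<MMDD HH:MM:SS.ffffff>) makes those retry gaps explicit:

    import re
    from datetime import datetime

    # Matches the util.go "No sandbox" entries above.
    SANDBOX = re.compile(
        r'I(?P<ts>\d{4} \d{2}:\d{2}:\d{2}\.\d{6}) .*'
        r'"No sandbox for pod can be found\. Need to start a new one" pod="(?P<pod>[^"]+)"'
    )

    def sandbox_retry_gaps(journal_text):
        """Seconds between successive 'No sandbox' retries, per pod."""
        seen = {}
        for m in SANDBOX.finditer(journal_text):
            ts = datetime.strptime(m.group("ts"), "%m%d %H:%M:%S.%f")
            seen.setdefault(m.group("pod"), []).append(ts)
        return {pod: [round((b - a).total_seconds(), 3) for a, b in zip(t, t[1:])]
                for pod, t in seen.items()}

    # For network-metrics-daemon-sgksk this excerpt yields gaps of ~2.0 s.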
Sep 30 09:47:52 crc kubenswrapper[4970]: I0930 09:47:52.647180 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.647416 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.647379701 +0000 UTC m=+149.719230695 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 09:47:52 crc kubenswrapper[4970]: I0930 09:47:52.667794 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 09:47:52 crc kubenswrapper[4970]: I0930 09:47:52.667845 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 09:47:52 crc kubenswrapper[4970]: I0930 09:47:52.667895 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.667980 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.668202 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.668398 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... the node-status block repeats at 09:47:52.713 ...]
Sep 30 09:47:52 crc kubenswrapper[4970]: I0930 09:47:52.748975 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 09:47:52 crc kubenswrapper[4970]: I0930 09:47:52.749110 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 09:47:52 crc kubenswrapper[4970]: I0930 09:47:52.749172 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 09:47:52 crc kubenswrapper[4970]: I0930 09:47:52.749233 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.749275 4970 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.749370 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.749347984 +0000 UTC m=+149.821199008 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.749395 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.749399 4970 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.749423 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.749428 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.749455 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.749444426 +0000 UTC m=+149.821295360 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.749446 4970 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.749468 4970 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.749490 4970 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.749515 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.749506568 +0000 UTC m=+149.821357622 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Sep 30 09:47:52 crc kubenswrapper[4970]: E0930 09:47:52.749571 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.749538019 +0000 UTC m=+149.821388993 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
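These MountVolume.SetUp failures are a third failure mode: the kubelet reports the backing ConfigMaps and Secrets as "not registered", meaning its internal view of those API objects has not been repopulated since the restart, and each volume operation is parked until 09:48:56 with the same 1m4s backoff. A small sketch, keyed to the exact nestedpendingoperations.go wording above, pulls the retry schedule out of an excerpt like this:

    import re

    # Mirrors the nestedpendingoperations.go "No retries permitted until" entries above.
    RETRY = re.compile(
        r'Operation for "\{volumeName:(?P<vol>\S+) podName:(?P<pod>\S*) nodeName:\}" failed\. '
        r'No retries permitted until (?P<until>\d{4}-\d{2}-\d{2} [\d:.]+)'
    )

    def retry_schedule(journal_text):
        """(volumeName, next retry timestamp) for every parked volume operation."""
        return [(m.group("vol"), m.group("until")) for m in RETRY.finditer(journal_text)]

    # In this excerpt every parked mount/unmount retries at 2025-09-30 09:48:56,
    # one 1m4s backoff step after the failures at 09:47:52.

All five parked operations share the same backoff step, so nothing here resolves before 09:48:56 even if the underlying objects become visible sooner.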
[... the node-status block repeats at ~100 ms intervals from 09:47:52.818 through 09:47:53.643, content unchanged ...]
Sep 30 09:47:53 crc kubenswrapper[4970]: I0930 09:47:53.668464 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:47:53 crc kubenswrapper[4970]: E0930 09:47:53.668627 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"
[... the node-status block repeats at ~100 ms intervals from 09:47:53.745 through 09:47:54.262, content unchanged ...]
Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.365252 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.365317 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.365335 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.365359 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.365377 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:54Z","lastTransitionTime":"2025-09-30T09:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.468513 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.468592 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.468614 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.468645 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.468665 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:54Z","lastTransitionTime":"2025-09-30T09:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.571456 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.571496 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.571505 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.571521 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.571531 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:54Z","lastTransitionTime":"2025-09-30T09:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.667920 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.668109 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.668366 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:54 crc kubenswrapper[4970]: E0930 09:47:54.668357 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:54 crc kubenswrapper[4970]: E0930 09:47:54.668543 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:54 crc kubenswrapper[4970]: E0930 09:47:54.668774 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.674508 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.674552 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.674570 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.674593 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.674610 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:54Z","lastTransitionTime":"2025-09-30T09:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.777909 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.777970 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.777985 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.778031 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.778047 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:54Z","lastTransitionTime":"2025-09-30T09:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.881454 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.881526 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.881542 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.881568 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.881587 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:54Z","lastTransitionTime":"2025-09-30T09:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.984089 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.984173 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.984199 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.984230 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:54 crc kubenswrapper[4970]: I0930 09:47:54.984250 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:54Z","lastTransitionTime":"2025-09-30T09:47:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.087652 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.087716 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.087738 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.087762 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.087779 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:55Z","lastTransitionTime":"2025-09-30T09:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.191549 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.191619 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.191640 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.191665 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.191682 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:55Z","lastTransitionTime":"2025-09-30T09:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.295431 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.295493 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.295524 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.295554 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.295575 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:55Z","lastTransitionTime":"2025-09-30T09:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.398919 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.398983 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.399062 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.399124 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.399149 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:55Z","lastTransitionTime":"2025-09-30T09:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.504159 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.504221 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.504240 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.504267 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.504293 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:55Z","lastTransitionTime":"2025-09-30T09:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.606875 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.606921 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.606936 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.606952 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.606966 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:55Z","lastTransitionTime":"2025-09-30T09:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.668292 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:55 crc kubenswrapper[4970]: E0930 09:47:55.668490 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.710181 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.710244 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.710263 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.710286 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.710304 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:55Z","lastTransitionTime":"2025-09-30T09:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.813833 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.813916 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.813940 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.813970 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.814033 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:55Z","lastTransitionTime":"2025-09-30T09:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.931841 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.931900 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.931920 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.931944 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:55 crc kubenswrapper[4970]: I0930 09:47:55.931961 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:55Z","lastTransitionTime":"2025-09-30T09:47:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.025090 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.025147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.025164 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.025186 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.025207 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:56Z","lastTransitionTime":"2025-09-30T09:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:56 crc kubenswrapper[4970]: E0930 09:47:56.048359 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:56Z is after 
2025-08-24T17:21:41Z" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.053769 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.053836 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.053859 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.053883 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.053903 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:56Z","lastTransitionTime":"2025-09-30T09:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:56 crc kubenswrapper[4970]: E0930 09:47:56.075535 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:56Z is after 
2025-08-24T17:21:41Z" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.081128 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.081195 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.081214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.081239 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.081260 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:56Z","lastTransitionTime":"2025-09-30T09:47:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[Three further "Error updating node status, will retry" entries followed (E0930 09:47:56.103293, 09:47:56.130269 and 09:47:56.158217), each preceded by an identical block of "Recording event message for node" lines (at 09:47:56.108805 and 09:47:56.141155) and each carrying a byte-for-byte identical node-status patch payload, ending in the same failure: Internal error occurred: failed calling webhook "node.network-node-identity.openshift.io": failed to call webhook: Post "https://127.0.0.1:9743/node?timeout=10s": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:56Z is after 2025-08-24T17:21:41Z. The duplicated payloads are elided here.]
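What kills every patch attempt above is the admission-webhook call, not the payload: Go's crypto/x509 refuses the webhook's serving certificate because the current time (2025-09-30) is past its NotAfter (2025-08-24). Below is a minimal, standalone sketch of that same validity-window check; the PEM path is a hypothetical placeholder, and this illustrates the library behavior rather than reproducing kubelet code.

```go
// certcheck.go: a minimal sketch (not kubelet code) of the validity-window
// test that crypto/x509 applies during verification and that produces
// "x509: certificate has expired or is not yet valid".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Hypothetical path; point this at the webhook's serving certificate.
	data, err := os.ReadFile("/tmp/webhook-serving-cert.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now()
	// The same window test the TLS handshake in the log is failing.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid: current time %s is outside [%s, %s]\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}
```

On CRC this pattern usually means the VM was started long after the bundled certificates expired; until the certificate is rotated, every call through this webhook will fail the same way, no matter how often the kubelet retries.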
Sep 30 09:47:56 crc kubenswrapper[4970]: E0930 09:47:56.158439 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
[The same four "Recording event message for node" lines and the same "Node became not ready" condition were then re-logged roughly every 100 ms, interleaved with the pod-sync entries below: at 09:47:56.160698, .263828, .366708, .469591, .573362, .676247, .778855, .881953, .985528 and at 09:47:57.089301, .191669, .294126, .397488, .500724, .604365. These verbatim repeats are elided.]
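The kubelet does not retry the status patch indefinitely: after a small fixed number of attempts it logs "Unable to update node status" and gives up until the next sync period. A sketch of that bounded-retry shape, with illustrative names and an error stub standing in for the failing PATCH (the real logic lives in kubelet_node_status.go, as the file:line tags in the log show):

```go
// A sketch of the bounded-retry pattern behind "update node status exceeds
// retry count". Constant value and helper names are illustrative only.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // the kubelet uses a small fixed retry budget

// patchNodeStatus stands in for the PATCH against the API server; here it
// always fails, the way every attempt in the log above did.
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			// Mirrors "Error updating node status, will retry".
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}
```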
Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.667723 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.667873 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:56 crc kubenswrapper[4970]: I0930 09:47:56.667926 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:56 crc kubenswrapper[4970]: E0930 09:47:56.668280 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:56 crc kubenswrapper[4970]: E0930 09:47:56.668439 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:56 crc kubenswrapper[4970]: E0930 09:47:56.668598 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
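Each of these pod-sync failures is the same single condition surfacing once per pod: the runtime reports NetworkReady=false because nothing has written a CNI config into /etc/kubernetes/cni/net.d yet (ovnkube-controller, which would write it, is itself in CrashLoopBackOff, as the entries below show). A sketch of the kind of directory scan behind that message; the accepted extension set mirrors common CNI tooling and is an assumption here, not verified kubelet code:

```go
// A sketch of the check behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": scan the conf dir for CNI config files.
// The extension list (.conf, .conflist, .json) is an assumption.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read CNI conf dir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, e.Name())
		}
	}
	if len(confs) == 0 {
		// The condition the kubelet keeps reporting above: the network
		// provider has not dropped its config yet.
		fmt.Println("no CNI configuration file found; network provider not started?")
		return
	}
	fmt.Println("found CNI configs:", confs)
}
```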
Has your network provider started?"} Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.667493 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:57 crc kubenswrapper[4970]: E0930 09:47:57.667689 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.669128 4970 scope.go:117] "RemoveContainer" containerID="595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99" Sep 30 09:47:57 crc kubenswrapper[4970]: E0930 09:47:57.669432 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\"" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.691634 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac
2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.707633 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.707729 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.707755 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.707827 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.707847 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:57Z","lastTransitionTime":"2025-09-30T09:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.713736 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.735455 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.755092 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.776930 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.794516 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.810566 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.811056 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.811115 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.811133 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.811160 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.811185 4970 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:57Z","lastTransitionTime":"2025-09-30T09:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.837127 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.863111 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.882717 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c381c9-b49e-4a91-a2f9-623f77db9c65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1889d9f91438a275e1ff6fc0adc74a42e47bcab961f06b96fd06acdf63819fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://266e8cda32e3cc8777714f25448405d0487add6c95f695e43ee9f7674ea1287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72de87daf9a24b2d7b7f2a45bce5ca85e4476ed82f44a39639554aad8e4b328c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.899752 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.913867 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.913931 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.913968 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.914044 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.914058 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:57Z","lastTransitionTime":"2025-09-30T09:47:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.915603 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.947581 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:57 crc kubenswrapper[4970]: I0930 09:47:57.970945 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:37Z\\\",\\\"message\\\":\\\"2025-09-30T09:46:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6\\\\n2025-09-30T09:46:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6 to /host/opt/cni/bin/\\\\n2025-09-30T09:46:52Z [verbose] multus-daemon started\\\\n2025-09-30T09:46:52Z [verbose] Readiness Indicator file check\\\\n2025-09-30T09:47:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.002746 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",
\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-s
ocket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:41Z\\\",\\\"message\\\":\\\"generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0930 09:47:41.545127 6982 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI0930 09:47:41.545133 6982 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.545865ms\\\\nI0930 09:47:41.545140 6982 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nF0930 09:47:41.545133 6982 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:57Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.017238 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.017295 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.017315 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.017344 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.017364 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:58Z","lastTransitionTime":"2025-09-30T09:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.020742 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa646711-828f-41f6-bcaf-ffbe1917a50d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcb713a1f422bed1215b09d05f3bc528355393af9e0aa765d93d50ecd4d044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.036132 4970 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:58Z is after 2025-08-24T17:21:41Z" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.054972 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:47:58Z is after 2025-08-24T17:21:41Z" Sep 30 
09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.119818 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.119905 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.119930 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.119959 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.119984 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:58Z","lastTransitionTime":"2025-09-30T09:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.222787 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.222860 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.222876 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.222905 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.222926 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:58Z","lastTransitionTime":"2025-09-30T09:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.326092 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.326181 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.326204 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.326236 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.326260 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:58Z","lastTransitionTime":"2025-09-30T09:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.428958 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.429068 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.429093 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.429118 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.429135 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:58Z","lastTransitionTime":"2025-09-30T09:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.532860 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.532943 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.532963 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.533022 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.533048 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:58Z","lastTransitionTime":"2025-09-30T09:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.637443 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.637505 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.637530 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.637558 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.637579 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:58Z","lastTransitionTime":"2025-09-30T09:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.668234 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.668287 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.668253 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:47:58 crc kubenswrapper[4970]: E0930 09:47:58.668516 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:47:58 crc kubenswrapper[4970]: E0930 09:47:58.668819 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:47:58 crc kubenswrapper[4970]: E0930 09:47:58.668935 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.739809 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.739908 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.739934 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.739968 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.740036 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:58Z","lastTransitionTime":"2025-09-30T09:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.842696 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.842759 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.842771 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.842786 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.842796 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:58Z","lastTransitionTime":"2025-09-30T09:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.945382 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.945470 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.945496 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.945530 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:58 crc kubenswrapper[4970]: I0930 09:47:58.945556 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:58Z","lastTransitionTime":"2025-09-30T09:47:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.048626 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.048656 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.048664 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.048677 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.048685 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:59Z","lastTransitionTime":"2025-09-30T09:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.151392 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.151445 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.151464 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.151488 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.151506 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:59Z","lastTransitionTime":"2025-09-30T09:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.254179 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.254253 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.254277 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.254306 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.254329 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:59Z","lastTransitionTime":"2025-09-30T09:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.356908 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.357035 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.357060 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.357090 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.357112 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:59Z","lastTransitionTime":"2025-09-30T09:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.459639 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.459691 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.459701 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.459717 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.459730 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:59Z","lastTransitionTime":"2025-09-30T09:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.561970 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.562055 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.562077 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.562101 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.562118 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:59Z","lastTransitionTime":"2025-09-30T09:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.664473 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.664541 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.664561 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.664588 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.664616 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:59Z","lastTransitionTime":"2025-09-30T09:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.668022 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:47:59 crc kubenswrapper[4970]: E0930 09:47:59.668482 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.767321 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.767361 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.767372 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.767385 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.767395 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:59Z","lastTransitionTime":"2025-09-30T09:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.869484 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.869622 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.869642 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.869664 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:47:59 crc kubenswrapper[4970]: I0930 09:47:59.869701 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:47:59Z","lastTransitionTime":"2025-09-30T09:47:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Sep 30 09:48:00 crc kubenswrapper[4970]: I0930 09:48:00.667650 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 09:48:00 crc kubenswrapper[4970]: I0930 09:48:00.667738 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 09:48:00 crc kubenswrapper[4970]: I0930 09:48:00.667650 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 09:48:00 crc kubenswrapper[4970]: E0930 09:48:00.667849 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 09:48:00 crc kubenswrapper[4970]: E0930 09:48:00.667959 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 09:48:00 crc kubenswrapper[4970]: E0930 09:48:00.668218 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 09:48:00 crc kubenswrapper[4970]: I0930 09:48:00.692493 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:48:00 crc kubenswrapper[4970]: I0930 09:48:00.692571 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:48:00 crc kubenswrapper[4970]: I0930 09:48:00.692589 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:48:00 crc kubenswrapper[4970]: I0930 09:48:00.692614 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:48:00 crc kubenswrapper[4970]: I0930 09:48:00.692635 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:00Z","lastTransitionTime":"2025-09-30T09:48:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:48:01 crc kubenswrapper[4970]: I0930 09:48:01.668307 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:48:01 crc kubenswrapper[4970]: E0930 09:48:01.668569 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"
Sep 30 09:48:01 crc kubenswrapper[4970]: I0930 09:48:01.727098 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:48:01 crc kubenswrapper[4970]: I0930 09:48:01.727144 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:48:01 crc kubenswrapper[4970]: I0930 09:48:01.727157 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:48:01 crc kubenswrapper[4970]: I0930 09:48:01.727194 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:48:01 crc kubenswrapper[4970]: I0930 09:48:01.727204 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:01Z","lastTransitionTime":"2025-09-30T09:48:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Sep 30 09:48:02 crc kubenswrapper[4970]: I0930 09:48:02.667501 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 09:48:02 crc kubenswrapper[4970]: E0930 09:48:02.667672 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Sep 30 09:48:02 crc kubenswrapper[4970]: I0930 09:48:02.667717 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 09:48:02 crc kubenswrapper[4970]: I0930 09:48:02.667803 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 09:48:02 crc kubenswrapper[4970]: E0930 09:48:02.667930 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 09:48:02 crc kubenswrapper[4970]: E0930 09:48:02.668026 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 09:48:02 crc kubenswrapper[4970]: I0930 09:48:02.760417 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:48:02 crc kubenswrapper[4970]: I0930 09:48:02.760464 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:48:02 crc kubenswrapper[4970]: I0930 09:48:02.760478 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:48:02 crc kubenswrapper[4970]: I0930 09:48:02.760500 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:48:02 crc kubenswrapper[4970]: I0930 09:48:02.760515 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:02Z","lastTransitionTime":"2025-09-30T09:48:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
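The missing sandboxes can be cross-checked on the node with plain crictl, filtering by the pod names taken from the entries above; nothing here is cluster-specific beyond those names:

    crictl pods --name network-check-target-xd92c
    crictl pods --name network-metrics-daemon-sgksk

While the CNI configuration is absent, neither command lists a sandbox, which matches the "No sandbox for pod can be found" messages.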
Sep 30 09:48:03 crc kubenswrapper[4970]: I0930 09:48:03.668028 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:48:03 crc kubenswrapper[4970]: E0930 09:48:03.668253 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"
Sep 30 09:48:03 crc kubenswrapper[4970]: I0930 09:48:03.693606 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:48:03 crc kubenswrapper[4970]: I0930 09:48:03.693657 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:48:03 crc kubenswrapper[4970]: I0930 09:48:03.693680 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:48:03 crc kubenswrapper[4970]: I0930 09:48:03.693703 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:48:03 crc kubenswrapper[4970]: I0930 09:48:03.693720 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:03Z","lastTransitionTime":"2025-09-30T09:48:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Sep 30 09:48:04 crc kubenswrapper[4970]: I0930 09:48:04.668353 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:04 crc kubenswrapper[4970]: I0930 09:48:04.668392 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:04 crc kubenswrapper[4970]: I0930 09:48:04.668366 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:04 crc kubenswrapper[4970]: E0930 09:48:04.668545 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:04 crc kubenswrapper[4970]: E0930 09:48:04.668735 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:04 crc kubenswrapper[4970]: E0930 09:48:04.668916 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:04 crc kubenswrapper[4970]: I0930 09:48:04.726983 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:04 crc kubenswrapper[4970]: I0930 09:48:04.727068 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:04 crc kubenswrapper[4970]: I0930 09:48:04.727086 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:04 crc kubenswrapper[4970]: I0930 09:48:04.727112 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:04 crc kubenswrapper[4970]: I0930 09:48:04.727129 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:04Z","lastTransitionTime":"2025-09-30T09:48:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[the five-record status group repeats verbatim, roughly every 100 ms, at 09:48:04.727, 09:48:04.829, 09:48:04.934, 09:48:05.036, 09:48:05.141, 09:48:05.244, 09:48:05.347, 09:48:05.450, 09:48:05.553 and 09:48:05.656]
Sep 30 09:48:05 crc kubenswrapper[4970]: I0930 09:48:05.667806 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:48:05 crc kubenswrapper[4970]: E0930 09:48:05.668041 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"
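[annotation] Each kubenswrapper record uses the klog header format: a severity letter (I/W/E/F), MMDD date, wall-clock time, PID, then source file and line before the quoted message (for example, setters.go:603 emits every "Node became not ready" record here). A small sketch that tallies how often each message repeats in a capture like this one; the regular expression is my reading of the records above, not a published grammar:

// klogcount.go: hedged sketch that counts repeated kubelet messages from a
// journal capture. The pattern reflects records such as
// `I0930 09:48:05.667806 4970 util.go:30] "No sandbox for pod can be found..."`.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// severity letter + MMDD, time, pid, file:line] "message"
	re := regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+\d+\s+(\S+:\d+)\] "([^"]*)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024) // records here can be very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]+" "+m[4]+" "+m[5]]++ // severity, source location, message
		}
	}
	for k, n := range counts {
		fmt.Printf("%6d  %s\n", n, k)
	}
}

Fed the raw journal (for example via journalctl -u kubelet), it would show the five node-status messages dominating this window at roughly ten repetitions per second.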
[the status group repeats again at 09:48:05.758, 09:48:05.862, 09:48:05.966, 09:48:06.069, 09:48:06.173 and 09:48:06.276, then once more starting at 09:48:06.373:]
Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.373896 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.373966 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.374028 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.374062 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.374082 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:06Z","lastTransitionTime":"2025-09-30T09:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:06 crc kubenswrapper[4970]: E0930 09:48:06.396436 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:06Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.401872 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.402140 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.402301 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.402464 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.402597 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:06Z","lastTransitionTime":"2025-09-30T09:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:06 crc kubenswrapper[4970]: E0930 09:48:06.424542 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:06Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.430354 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.430402 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.430413 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.430432 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.430448 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:06Z","lastTransitionTime":"2025-09-30T09:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:06 crc kubenswrapper[4970]: E0930 09:48:06.449257 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:06Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.453452 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.453525 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.453544 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.453572 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.453590 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:06Z","lastTransitionTime":"2025-09-30T09:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:06 crc kubenswrapper[4970]: E0930 09:48:06.476712 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:06Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.482194 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.482423 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.482514 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.482615 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.482719 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:06Z","lastTransitionTime":"2025-09-30T09:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:06 crc kubenswrapper[4970]: E0930 09:48:06.503282 4970 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-09-30T09:48:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"686f6a6c-33dd-428d-95f2-c1d9edb8dca6\\\",\\\"systemUUID\\\":\\\"2e6e3b7a-0e45-4517-abb1-931732be7041\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:06Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:06 crc kubenswrapper[4970]: E0930 09:48:06.503792 4970 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.506133 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.506200 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.506217 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.506244 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.506261 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:06Z","lastTransitionTime":"2025-09-30T09:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.608480 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.608543 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.608562 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.608587 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.608605 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:06Z","lastTransitionTime":"2025-09-30T09:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.668557 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.668606 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.669160 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:06 crc kubenswrapper[4970]: E0930 09:48:06.669396 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:06 crc kubenswrapper[4970]: E0930 09:48:06.669537 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:06 crc kubenswrapper[4970]: E0930 09:48:06.669657 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.712034 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.712501 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.712740 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.712938 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.713145 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:06Z","lastTransitionTime":"2025-09-30T09:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.816152 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.816219 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.816237 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.816266 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.816284 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:06Z","lastTransitionTime":"2025-09-30T09:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.919501 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.919564 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.919582 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.919607 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:06 crc kubenswrapper[4970]: I0930 09:48:06.919629 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:06Z","lastTransitionTime":"2025-09-30T09:48:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.021533 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.021570 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.021578 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.021592 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.021601 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:07Z","lastTransitionTime":"2025-09-30T09:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.125324 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.125387 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.125405 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.125430 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.125456 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:07Z","lastTransitionTime":"2025-09-30T09:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.228238 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.228333 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.228344 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.228359 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.228368 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:07Z","lastTransitionTime":"2025-09-30T09:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.332204 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.332269 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.332285 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.332311 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.332328 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:07Z","lastTransitionTime":"2025-09-30T09:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.435921 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.436032 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.436054 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.436081 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.436099 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:07Z","lastTransitionTime":"2025-09-30T09:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.514280 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:07 crc kubenswrapper[4970]: E0930 09:48:07.514458 4970 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:48:07 crc kubenswrapper[4970]: E0930 09:48:07.514538 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs podName:8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c nodeName:}" failed. No retries permitted until 2025-09-30 09:49:11.514516676 +0000 UTC m=+164.586367640 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs") pod "network-metrics-daemon-sgksk" (UID: "8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c") : object "openshift-multus"/"metrics-daemon-secret" not registered Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.539594 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.539651 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.539668 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.539697 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.539766 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:07Z","lastTransitionTime":"2025-09-30T09:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.642724 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.643280 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.643431 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.643581 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.643724 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:07Z","lastTransitionTime":"2025-09-30T09:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.668539 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:07 crc kubenswrapper[4970]: E0930 09:48:07.668920 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.690968 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd7deb36-3bd1-466e-ae66-7a1f47354182\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ddcbbace338dcb5d1392bca828e7dd935294445b6c9d9f312732a4a27088f689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d2224f867e414d218d138486cfd7c8900b18808143970fbe2e172949cf2274d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ab3eb483e020f90d6d692823fa0edeead074c6fe36abc49663505ded6caa703\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae571e2b231d63d708e4917ddae4eeb5260aaba11624180efc34573fb59bbe57\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.711435 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f2f40183a74f89d9cb462c1698b42b3691802e7e1f1195815f4a1cb700a03ab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.732077 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ebd51e41015cfc75d6692476e550da382ad9593a4ee17f8a2d5d0230ba39c50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.748448 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.748500 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.748518 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.748547 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.748566 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:07Z","lastTransitionTime":"2025-09-30T09:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.749651 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.766144 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"92198682-93fe-4b8a-8b03-bb768b56a129\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce49fa71b9a8ddc335f8a29632bc2442c004ee636d15c868cb2048dd50639205\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bft59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gcphg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.783051 4970 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-image-registry/node-ca-4t4nh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2eedcbd8-2867-4c1d-8d23-25e76843cca8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3011f2b4d249feccd3a065eedc870976ec230c5eb8a081abbcb32d918c0d9399\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qpkn7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:52Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-4t4nh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.804753 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wdlzl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adc4e528-ad76-4673-925a-f4f932e1ac51\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:37Z\\\",\\\"message\\\":\\\"2025-09-30T09:46:52+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6\\\\n2025-09-30T09:46:52+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_981b423f-2f08-42a1-b863-7bb787d3c0a6 to /host/opt/cni/bin/\\\\n2025-09-30T09:46:52Z [verbose] multus-daemon started\\\\n2025-09-30T09:46:52Z [verbose] Readiness Indicator file check\\\\n2025-09-30T09:47:37Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dtjh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wdlzl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.861128 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.861184 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.861201 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.861226 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.861245 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:07Z","lastTransitionTime":"2025-09-30T09:48:07Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.873159 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9687ea64-3693-468d-9fde-2059deb10338\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/s
ecrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-09-30T09:47:41Z\\\",\\\"message\\\":\\\"generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0930 09:47:41.545127 6982 services_controller.go:453] Built service openshift-dns/dns-default template LB for network=default: []services.LB{}\\\\nI0930 09:47:41.545133 6982 services_controller.go:360] Finished syncing service etcd on namespace openshift-etcd for network=default : 2.545865ms\\\\nI0930 09:47:41.545140 6982 services_controller.go:454] Service openshift-dns/dns-default for network=default has 0 cluster-wide, 3 per-node configs, 0 template configs, making 0 (cluster) 2 (per node) and 0 (template) load balancers\\\\nF0930 09:47:41.545133 6982 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:47:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2qpww\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-frblw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.892642 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fa646711-828f-41f6-bcaf-ffbe1917a50d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fcb713a1f422bed1215b09d05f3bc528355393af9e0aa765d93d50ecd4d044aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3d3823b881363d6c4a81ba825009df310bc0f7e4d06086a4cba73c63ce51bc1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.907077 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3db543cf-a9cd-47de-991d-a5b23b9ddd1d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e9e0fd7fc813018a8a25a39e5ff74aed3f62a3d042585c3a0a57b0f6ab3cf419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b59af43176d92e001b86428633da1562cc8024c17ee9074cb83388225e88580e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://62305f0cf50f9a932162b64d5e7008d75755ec66b58b960450b24ec378a27e2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cab877c815285d7db07447a1fefb29c2174bf9893c3e1d434fa6098a0bfea15f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6203d9a46a2d14eca3c9f9544b0dddfdf7059aeec71f5dc6132ef062615bb1e1\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"le observer\\\\nW0930 09:46:47.902690 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0930 09:46:47.903385 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0930 09:46:47.904500 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3188710699/tls.crt::/tmp/serving-cert-3188710699/tls.key\\\\\\\"\\\\nI0930 09:46:48.097372 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0930 09:46:48.099820 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0930 09:46:48.099837 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0930 09:46:48.099871 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0930 09:46:48.099876 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0930 09:46:48.108585 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0930 09:46:48.108609 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108613 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0930 09:46:48.108618 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0930 09:46:48.108621 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0930 09:46:48.108625 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0930 09:46:48.108628 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0930 09:46:48.108774 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0930 09:46:48.110656 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:42Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c09a3c6af72d3a445811597a724b957fe6330d85b10373fbe7cdef685bb937c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e8099102ce1191f176da0aac9438f4322b92bb3f5e3890c1a545a70f9c2880db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.919791 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"76c381c9-b49e-4a91-a2f9-623f77db9c65\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1889d9f91438a275e1ff6fc0adc74a42e47bcab961f06b96fd06acdf63819fea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://266e8cda32e3cc8777714f25448405d0487add6c95f695e43ee9f7674ea1287d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://72de87daf9a24b2d7b7f2a45bce5ca85e4476ed82f44a39639554aad8e4b328c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91700771fc5360efab615dc1e170abbe4a5868479a25f11922639aace23a5db9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:27Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.933094 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.943767 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-zd52c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6f1f7645-8157-4743-a7cc-0083a3269987\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba6ce7b502c705c7fee5c69e254e88beddea87261b5683a362d43da0b3c905d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6nm5x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zd52c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.960688 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-d6567" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"69776d3e-4ddb-484b-86dd-930de13b3523\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a4fd454c846162dcaf3d88d2b029049fbe76dcb136f72e6e45e6191153cbc58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://816370c8013454eec1c0ba0a7ff5b9021ada0bee57791e0a3ccd4e6ca1971e95\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd9
74dbe9540134d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://30cbe3b4631afd9f6b4e029e76ddf2a738a2965b6d1fc8abd974dbe9540134d6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:51Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4b5fa95762b6f8a33d736b9422dfedd3ba58c616bc87b15cc422518d1caadf50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:52Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://caf0b8557104d7de785dccd06b73e2b0f02b660fae7089cc7bc12d957007f2b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:53Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e27fa7258908b7192ff5147accdbc7c4e6945e0552cb9524e71186a79dc436b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ee5409dd2c8a5a48af0c113564bb5df5357e4eda3ffb360228743f0d1c81c9f4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-09-30T09:46:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-09-30T09:46:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7vjk6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:46:49Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-d6567\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.963891 4970 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.963954 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.963967 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.963985 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.964018 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:07Z","lastTransitionTime":"2025-09-30T09:48:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.978843 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"eba65cf9-c598-46fa-a09b-23cb90b9575f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://87ce2ccac1936de522a4aa8502dfe17b87f6dd92197c0699654a9b69ba64cabb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a80bef1f17f4ecdac23ba917d4ab15ab6b4f41adcc3df62bb39e0289ac3924ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994
82919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:47:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l68gs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:01Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-6sjvb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:07 crc kubenswrapper[4970]: I0930 09:48:07.993031 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-sgksk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-30T09:47:03Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jjxpn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-09-30T09:47:03Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-sgksk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:07Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.010289 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:48Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
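[Annotation, not part of the log: the "failed to patch status" payloads above are Kubernetes strategic merge patches. The "$setElementOrder/conditions" key is a merge directive rather than data: it pins the order of the "conditions" list, whose elements the API server merges by their "type" key instead of replacing the whole list. A reduced skeleton of one such patch, with the uid and timestamp taken from the log, printed by a small Go program:]

    package main

    import "fmt"

    func main() {
        // Reduced skeleton of one status patch from the log (strategic merge
        // patch). "$setElementOrder/conditions" is a directive, not data: it
        // fixes the order of the "conditions" list, whose entries are merged
        // by their "type" key rather than replaced wholesale.
        patch := `{
      "metadata": {"uid": "eba65cf9-c598-46fa-a09b-23cb90b9575f"},
      "status": {
        "$setElementOrder/conditions": [
          {"type": "PodReadyToStartContainers"},
          {"type": "Initialized"},
          {"type": "Ready"},
          {"type": "ContainersReady"},
          {"type": "PodScheduled"}
        ],
        "conditions": [
          {"type": "Ready", "status": "True",
           "lastTransitionTime": "2025-09-30T09:47:03Z"}
        ]
      }
    }`
        fmt.Println(patch)
    }

[Because conditions are merged per-type, the kubelet can resend the same patch idempotently, which is why these identical payloads repeat on every sync attempt.]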
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:08Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.026362 4970 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-09-30T09:46:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://105c265d6346a47efb6eb43579c7dbeed3ff960218d9eebe3b7164330d3f5149\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbad526d571ccd993a4c2c0ce5a8701119cfa674be90bce3979ba56daad62dfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-09-30T09:46:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-09-30T09:48:08Z is after 2025-08-24T17:21:41Z" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.067063 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.067154 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.067208 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.067232 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.067249 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:08Z","lastTransitionTime":"2025-09-30T09:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.171089 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.171169 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.171188 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.171207 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.171218 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:08Z","lastTransitionTime":"2025-09-30T09:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
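[Annotation, not part of the log: every status patch above fails for the same reason: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate that expired on 2025-08-24T17:21:41Z, well before the current time in the log. A minimal diagnostic sketch in Go, with the address taken from the failing Post URL, that dials the endpoint and prints the served certificate's validity window:]

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // InsecureSkipVerify lets us complete the handshake and inspect an
        // already-expired certificate; the address comes from the failing
        // webhook URL in the log.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743",
            &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()
        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("NotBefore=%s NotAfter=%s expired=%v\n",
            cert.NotBefore.Format(time.RFC3339),
            cert.NotAfter.Format(time.RFC3339),
            time.Now().After(cert.NotAfter))
    }

[With an expired serving certificate, every verifying client fails closed, which is why the status patches are rejected even for pods whose containers are otherwise healthy.]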
Has your network provider started?"} Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.274410 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.274482 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.274506 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.274539 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.274562 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:08Z","lastTransitionTime":"2025-09-30T09:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.378148 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.378194 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.378209 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.378231 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.378247 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:08Z","lastTransitionTime":"2025-09-30T09:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.480952 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.481108 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.481173 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.481208 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.481238 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:08Z","lastTransitionTime":"2025-09-30T09:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.584273 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.584328 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.584345 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.584367 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.584383 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:08Z","lastTransitionTime":"2025-09-30T09:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.667471 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.667487 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.667487 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:08 crc kubenswrapper[4970]: E0930 09:48:08.667667 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:08 crc kubenswrapper[4970]: E0930 09:48:08.668069 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:08 crc kubenswrapper[4970]: E0930 09:48:08.667914 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
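[Annotation, not part of the log: the util.go lines mark the point where the kubelet decides each pod needs a fresh sandbox: none exists yet, or none is ready, but sandbox creation is then skipped because the network readiness gate is still failing. A simplified, hypothetical model of that decision; the real logic lives in the kubelet's pod workers and CRI client:]

    package main

    import "fmt"

    // sandbox is a simplified stand-in for a CRI pod sandbox status.
    type sandbox struct{ ready bool }

    // needNewSandbox mirrors the decision behind "No sandbox for pod can be
    // found. Need to start a new one": with no sandbox, or no ready sandbox,
    // the kubelet must create one before any container can start.
    func needNewSandbox(sandboxes []sandbox) bool {
        for _, s := range sandboxes {
            if s.ready {
                return false
            }
        }
        return true
    }

    func main() {
        fmt.Println(needNewSandbox(nil))               // true: no sandbox at all
        fmt.Println(needNewSandbox([]sandbox{{true}})) // false: a ready one exists
    }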
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.688310 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.688352 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.688363 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.688382 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.688394 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:08Z","lastTransitionTime":"2025-09-30T09:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.791271 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.791342 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.791359 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.791382 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.791400 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:08Z","lastTransitionTime":"2025-09-30T09:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.894497 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.894784 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.894800 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.894817 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.894833 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:08Z","lastTransitionTime":"2025-09-30T09:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.997984 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.998098 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.998115 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.998539 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:08 crc kubenswrapper[4970]: I0930 09:48:08.998589 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:08Z","lastTransitionTime":"2025-09-30T09:48:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.101448 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.101511 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.101527 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.101549 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.101566 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:09Z","lastTransitionTime":"2025-09-30T09:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
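[Annotation, not part of the log: the recurring KubeletNotReady message bottoms out in a directory scan: the container runtime looks for CNI network configuration under /etc/kubernetes/cni/net.d/ and finds nothing, so it reports NetworkReady=false until the network provider writes its config. A rough sketch of that kind of check, assuming the file extensions libcni conventionally loads:]

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d" // path from the log message
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("read dir:", err)
            return
        }
        var confs []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // extensions libcni accepts
                confs = append(confs, e.Name())
            }
        }
        if len(confs) == 0 {
            fmt.Println("no CNI configuration file found; network plugin not ready")
            return
        }
        fmt.Println("CNI configs:", confs)
    }

[Once the OVN-Kubernetes node pod recovers and drops its config into that directory, the gate should flip and these NodeNotReady heartbeats stop.]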
Has your network provider started?"} Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.211405 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.211478 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.211489 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.211516 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.211531 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:09Z","lastTransitionTime":"2025-09-30T09:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.314849 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.314926 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.314942 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.314968 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.315021 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:09Z","lastTransitionTime":"2025-09-30T09:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.417137 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.417173 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.417182 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.417196 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.417204 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:09Z","lastTransitionTime":"2025-09-30T09:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.519364 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.519426 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.519450 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.519484 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.519500 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:09Z","lastTransitionTime":"2025-09-30T09:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.622723 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.622767 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.622800 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.622835 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.622847 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:09Z","lastTransitionTime":"2025-09-30T09:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.668205 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:09 crc kubenswrapper[4970]: E0930 09:48:09.668736 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.682689 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.726275 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.726326 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.726341 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.726363 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.726381 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:09Z","lastTransitionTime":"2025-09-30T09:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.829425 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.829516 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.829538 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.829558 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.829573 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:09Z","lastTransitionTime":"2025-09-30T09:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.932935 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.933028 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.933052 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.933082 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:09 crc kubenswrapper[4970]: I0930 09:48:09.933104 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:09Z","lastTransitionTime":"2025-09-30T09:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.036203 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.036731 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.036754 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.036785 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.036810 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:10Z","lastTransitionTime":"2025-09-30T09:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.139695 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.139775 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.139800 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.139831 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.139854 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:10Z","lastTransitionTime":"2025-09-30T09:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
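[Annotation, not part of the log: each setters.go line carries the full Ready condition as JSON; between repeats only lastHeartbeatTime and lastTransitionTime advance. Its shape, decoded by a small Go program with field names exactly as logged and the sample value copied from the line above:]

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // nodeCondition matches the condition object logged by setters.go.
    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:09Z","lastTransitionTime":"2025-09-30T09:48:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            fmt.Println("unmarshal:", err)
            return
        }
        fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
    }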
Has your network provider started?"} Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.243518 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.243577 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.243595 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.243619 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.243636 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:10Z","lastTransitionTime":"2025-09-30T09:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.347133 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.347206 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.347228 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.347256 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.347291 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:10Z","lastTransitionTime":"2025-09-30T09:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.450944 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.451067 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.451094 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.451125 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.451146 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:10Z","lastTransitionTime":"2025-09-30T09:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.554147 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.554205 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.554221 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.554244 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.554261 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:10Z","lastTransitionTime":"2025-09-30T09:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.656921 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.657021 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.657040 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.657065 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.657084 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:10Z","lastTransitionTime":"2025-09-30T09:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.667469 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.667495 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.667497 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:10 crc kubenswrapper[4970]: E0930 09:48:10.667803 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:10 crc kubenswrapper[4970]: E0930 09:48:10.668034 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:10 crc kubenswrapper[4970]: E0930 09:48:10.668183 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.760164 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.760234 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.760256 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.760286 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.760309 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:10Z","lastTransitionTime":"2025-09-30T09:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.864166 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.864258 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.864275 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.864298 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:10 crc kubenswrapper[4970]: I0930 09:48:10.864316 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:10Z","lastTransitionTime":"2025-09-30T09:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... the five node-status lines logged at 09:48:10.760 repeat unchanged (only timestamps advance) at 09:48:10.864, 09:48:10.967, 09:48:11.070, 09:48:11.173, 09:48:11.277, 09:48:11.379 and 09:48:11.484 ...]
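The condition={...} payload in each setters.go:603 line is the node's Ready condition serialized as JSON; the fields mirror corev1.NodeCondition. A self-contained sketch that decodes one of the logged payloads (the message string is abbreviated here to keep the example short):

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the fields visible in the logged JSON; a real
// client would use k8s.io/api/core/v1.NodeCondition instead.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:10Z","lastTransitionTime":"2025-09-30T09:48:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false"}`
	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("node Ready=%s reason=%s since=%s\n", c.Status, c.Reason, c.LastTransitionTime)
}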
[... node-status cycle repeats at 09:48:11.588 ...]
Sep 30 09:48:11 crc kubenswrapper[4970]: I0930 09:48:11.669099 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:48:11 crc kubenswrapper[4970]: E0930 09:48:11.669332 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"
Sep 30 09:48:11 crc kubenswrapper[4970]: I0930 09:48:11.669524 4970 scope.go:117] "RemoveContainer" containerID="595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"
Sep 30 09:48:11 crc kubenswrapper[4970]: E0930 09:48:11.669857 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-frblw_openshift-ovn-kubernetes(9687ea64-3693-468d-9fde-2059deb10338)\"" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338"
[... node-status cycle repeats at 09:48:11.691 ...]
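The "back-off 40s" for ovnkube-controller above is the kubelet's CrashLoopBackOff delay. Upstream kubelet defaults (assumed here, not read from this cluster) start the delay at 10s and double it on each failed restart, capped at 5m, so 40s indicates a container that has already failed a couple of starts:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay sketches the kubelet's restart back-off: 10s base,
// doubled per failed restart, capped at 5m (assumed upstream defaults).
func crashLoopDelay(restarts int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < restarts; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for r := 0; r <= 5; r++ {
		fmt.Printf("restart %d -> wait %s\n", r, crashLoopDelay(r))
	}
	// restart 2 -> wait 40s, matching the ovnkube-controller message above.
}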
[... node-status cycle repeats at 09:48:11.794 and 09:48:11.898 ...]
[... node-status cycle repeats at 09:48:12.001, 09:48:12.104, 09:48:12.207, 09:48:12.310, 09:48:12.413 and 09:48:12.516 ...]
[... node-status cycle repeats at 09:48:12.619 ...]
Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.668290 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.668374 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.668290 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 09:48:12 crc kubenswrapper[4970]: E0930 09:48:12.668466 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 09:48:12 crc kubenswrapper[4970]: E0930 09:48:12.668523 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 09:48:12 crc kubenswrapper[4970]: E0930 09:48:12.668580 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.722028 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.722072 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.722084 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.722107 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.722120 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:12Z","lastTransitionTime":"2025-09-30T09:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.826351 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.826463 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.826489 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.826567 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:12 crc kubenswrapper[4970]: I0930 09:48:12.826597 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:12Z","lastTransitionTime":"2025-09-30T09:48:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... node-status cycle repeats at 09:48:12.929, 09:48:13.034, 09:48:13.137, 09:48:13.241, 09:48:13.344 and 09:48:13.448 ...]
[... node-status cycle repeats at 09:48:13.553 and 09:48:13.656 ...]
Sep 30 09:48:13 crc kubenswrapper[4970]: I0930 09:48:13.667767 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:48:13 crc kubenswrapper[4970]: E0930 09:48:13.668040 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:13 crc kubenswrapper[4970]: I0930 09:48:13.759046 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:13 crc kubenswrapper[4970]: I0930 09:48:13.759124 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:13 crc kubenswrapper[4970]: I0930 09:48:13.759143 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:13 crc kubenswrapper[4970]: I0930 09:48:13.759171 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:13 crc kubenswrapper[4970]: I0930 09:48:13.759189 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:13Z","lastTransitionTime":"2025-09-30T09:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:13 crc kubenswrapper[4970]: I0930 09:48:13.863330 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:13 crc kubenswrapper[4970]: I0930 09:48:13.863390 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:13 crc kubenswrapper[4970]: I0930 09:48:13.863402 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:13 crc kubenswrapper[4970]: I0930 09:48:13.863534 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:13 crc kubenswrapper[4970]: I0930 09:48:13.863592 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:13Z","lastTransitionTime":"2025-09-30T09:48:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... node-status cycle repeats at 09:48:13.967, 09:48:14.070, 09:48:14.175, 09:48:14.279, 09:48:14.383 and 09:48:14.487 ...]
[... node-status cycle repeats at 09:48:14.590 ...]
Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.667369 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Sep 30 09:48:14 crc kubenswrapper[4970]: E0930 09:48:14.667534 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.667939 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.668029 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Sep 30 09:48:14 crc kubenswrapper[4970]: E0930 09:48:14.668401 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Sep 30 09:48:14 crc kubenswrapper[4970]: E0930 09:48:14.668461 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.695066 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.695327 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.695345 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.695373 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.695394 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:14Z","lastTransitionTime":"2025-09-30T09:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.798831 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.798908 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.798931 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.798963 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:14 crc kubenswrapper[4970]: I0930 09:48:14.798983 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:14Z","lastTransitionTime":"2025-09-30T09:48:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[... node-status cycle repeats at 09:48:14.902, 09:48:15.005, 09:48:15.108, 09:48:15.212, 09:48:15.315 and 09:48:15.419 ...]
[... node-status cycle repeats at 09:48:15.523 and 09:48:15.626 ...]
Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.667899 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk"
Sep 30 09:48:15 crc kubenswrapper[4970]: E0930 09:48:15.668157 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c"
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.731433 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.731542 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.731569 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.731609 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.731637 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:15Z","lastTransitionTime":"2025-09-30T09:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.835093 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.835169 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.835183 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.835208 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.835224 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:15Z","lastTransitionTime":"2025-09-30T09:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.938886 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.938949 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.938961 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.939020 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:15 crc kubenswrapper[4970]: I0930 09:48:15.939034 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:15Z","lastTransitionTime":"2025-09-30T09:48:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.041911 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.042321 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.042405 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.042531 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.042603 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:16Z","lastTransitionTime":"2025-09-30T09:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.145789 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.145860 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.145876 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.145910 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.145929 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:16Z","lastTransitionTime":"2025-09-30T09:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.249776 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.249831 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.249849 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.249873 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.249891 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:16Z","lastTransitionTime":"2025-09-30T09:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.353295 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.353387 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.353408 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.353436 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.353455 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:16Z","lastTransitionTime":"2025-09-30T09:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.456856 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.456932 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.457152 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.457186 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.457204 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:16Z","lastTransitionTime":"2025-09-30T09:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.559315 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.559375 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.559390 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.559413 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.559429 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:16Z","lastTransitionTime":"2025-09-30T09:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.606568 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.606635 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.606652 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.606677 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.606694 4970 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-30T09:48:16Z","lastTransitionTime":"2025-09-30T09:48:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.659929 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6"] Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.661141 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.663342 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.663463 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.665134 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.665140 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.667469 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.667536 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:16 crc kubenswrapper[4970]: E0930 09:48:16.667605 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:16 crc kubenswrapper[4970]: E0930 09:48:16.667824 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.668023 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:16 crc kubenswrapper[4970]: E0930 09:48:16.668101 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.746920 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=7.746891455 podStartE2EDuration="7.746891455s" podCreationTimestamp="2025-09-30 09:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:16.746846344 +0000 UTC m=+109.818697278" watchObservedRunningTime="2025-09-30 09:48:16.746891455 +0000 UTC m=+109.818742389" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.778348 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=82.77831632 podStartE2EDuration="1m22.77831632s" podCreationTimestamp="2025-09-30 09:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:16.76502313 +0000 UTC m=+109.836874064" watchObservedRunningTime="2025-09-30 09:48:16.77831632 +0000 UTC m=+109.850167294" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.779051 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-4t4nh" podStartSLOduration=89.779042729 podStartE2EDuration="1m29.779042729s" podCreationTimestamp="2025-09-30 09:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:16.778138556 +0000 UTC m=+109.849989490" watchObservedRunningTime="2025-09-30 09:48:16.779042729 +0000 UTC m=+109.850893673" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.814116 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podStartSLOduration=89.814068446 podStartE2EDuration="1m29.814068446s" podCreationTimestamp="2025-09-30 09:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:16.813687096 +0000 UTC m=+109.885538040" watchObservedRunningTime="2025-09-30 09:48:16.814068446 +0000 UTC m=+109.885919420" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.821331 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8010e495-0a48-44b6-aa2d-6869386f7333-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.821440 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8010e495-0a48-44b6-aa2d-6869386f7333-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.821481 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8010e495-0a48-44b6-aa2d-6869386f7333-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.821514 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8010e495-0a48-44b6-aa2d-6869386f7333-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.821557 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8010e495-0a48-44b6-aa2d-6869386f7333-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.844364 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.844345082 podStartE2EDuration="58.844345082s" podCreationTimestamp="2025-09-30 09:47:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:16.82982178 +0000 UTC m=+109.901672724" watchObservedRunningTime="2025-09-30 09:48:16.844345082 +0000 UTC m=+109.916196036" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.858760 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zd52c" podStartSLOduration=89.858735731 podStartE2EDuration="1m29.858735731s" podCreationTimestamp="2025-09-30 09:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:16.858549126 +0000 UTC m=+109.930400060" watchObservedRunningTime="2025-09-30 09:48:16.858735731 +0000 UTC m=+109.930586685" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.886019 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-d6567" podStartSLOduration=88.885981699 podStartE2EDuration="1m28.885981699s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:16.885607789 +0000 UTC m=+109.957458733" watchObservedRunningTime="2025-09-30 09:48:16.885981699 +0000 UTC m=+109.957832633" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.903031 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-wdlzl" podStartSLOduration=88.903005475 podStartE2EDuration="1m28.903005475s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:16.902490612 +0000 UTC m=+109.974341556" watchObservedRunningTime="2025-09-30 09:48:16.903005475 +0000 UTC m=+109.974856409" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.923076 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8010e495-0a48-44b6-aa2d-6869386f7333-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.923154 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8010e495-0a48-44b6-aa2d-6869386f7333-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.923176 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8010e495-0a48-44b6-aa2d-6869386f7333-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.923198 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8010e495-0a48-44b6-aa2d-6869386f7333-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.923225 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8010e495-0a48-44b6-aa2d-6869386f7333-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.923792 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8010e495-0a48-44b6-aa2d-6869386f7333-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.923843 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8010e495-0a48-44b6-aa2d-6869386f7333-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.924188 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8010e495-0a48-44b6-aa2d-6869386f7333-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.936104 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8010e495-0a48-44b6-aa2d-6869386f7333-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.943118 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=42.943101222 podStartE2EDuration="42.943101222s" podCreationTimestamp="2025-09-30 09:47:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:16.941778008 +0000 UTC m=+110.013628962" watchObservedRunningTime="2025-09-30 09:48:16.943101222 +0000 UTC m=+110.014952156" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.952406 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8010e495-0a48-44b6-aa2d-6869386f7333-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8cwf6\" (UID: \"8010e495-0a48-44b6-aa2d-6869386f7333\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.970393 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.9703574 podStartE2EDuration="1m28.9703574s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:16.970250688 +0000 UTC m=+110.042101622" watchObservedRunningTime="2025-09-30 09:48:16.9703574 +0000 UTC m=+110.042208334" Sep 30 09:48:16 crc kubenswrapper[4970]: I0930 09:48:16.975270 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" Sep 30 09:48:17 crc kubenswrapper[4970]: I0930 09:48:17.016727 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-6sjvb" podStartSLOduration=89.016706388 podStartE2EDuration="1m29.016706388s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:16.999405965 +0000 UTC m=+110.071256909" watchObservedRunningTime="2025-09-30 09:48:17.016706388 +0000 UTC m=+110.088557342" Sep 30 09:48:17 crc kubenswrapper[4970]: I0930 09:48:17.512463 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" event={"ID":"8010e495-0a48-44b6-aa2d-6869386f7333","Type":"ContainerStarted","Data":"1f292a12886a90ab612543b526c4f71520001d307fc83f9f9148c1333d19906d"} Sep 30 09:48:17 crc kubenswrapper[4970]: I0930 09:48:17.513030 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" event={"ID":"8010e495-0a48-44b6-aa2d-6869386f7333","Type":"ContainerStarted","Data":"e494517855c54b20afb0b83765feb5878c4684a26a7aa8058763a3182574a2a1"} Sep 30 09:48:17 crc kubenswrapper[4970]: I0930 09:48:17.536622 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8cwf6" podStartSLOduration=89.536600408 podStartE2EDuration="1m29.536600408s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:17.53512931 +0000 UTC m=+110.606980284" watchObservedRunningTime="2025-09-30 09:48:17.536600408 +0000 UTC m=+110.608451352" Sep 30 09:48:17 crc kubenswrapper[4970]: I0930 09:48:17.667901 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:17 crc kubenswrapper[4970]: E0930 09:48:17.670044 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:18 crc kubenswrapper[4970]: I0930 09:48:18.668657 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:18 crc kubenswrapper[4970]: I0930 09:48:18.668782 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:18 crc kubenswrapper[4970]: E0930 09:48:18.668934 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:18 crc kubenswrapper[4970]: I0930 09:48:18.669078 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:18 crc kubenswrapper[4970]: E0930 09:48:18.669180 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:18 crc kubenswrapper[4970]: E0930 09:48:18.669335 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:19 crc kubenswrapper[4970]: I0930 09:48:19.668288 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:19 crc kubenswrapper[4970]: E0930 09:48:19.668755 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:20 crc kubenswrapper[4970]: I0930 09:48:20.668167 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:20 crc kubenswrapper[4970]: E0930 09:48:20.668320 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:20 crc kubenswrapper[4970]: I0930 09:48:20.668167 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:20 crc kubenswrapper[4970]: E0930 09:48:20.668420 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:20 crc kubenswrapper[4970]: I0930 09:48:20.668167 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:20 crc kubenswrapper[4970]: E0930 09:48:20.668660 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:21 crc kubenswrapper[4970]: I0930 09:48:21.667815 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:21 crc kubenswrapper[4970]: E0930 09:48:21.667960 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:22 crc kubenswrapper[4970]: I0930 09:48:22.667766 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:22 crc kubenswrapper[4970]: E0930 09:48:22.668172 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:22 crc kubenswrapper[4970]: I0930 09:48:22.667807 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:22 crc kubenswrapper[4970]: E0930 09:48:22.668794 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:22 crc kubenswrapper[4970]: I0930 09:48:22.667809 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:22 crc kubenswrapper[4970]: E0930 09:48:22.669083 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:23 crc kubenswrapper[4970]: I0930 09:48:23.668592 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:23 crc kubenswrapper[4970]: E0930 09:48:23.668977 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:24 crc kubenswrapper[4970]: I0930 09:48:24.544200 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wdlzl_adc4e528-ad76-4673-925a-f4f932e1ac51/kube-multus/1.log" Sep 30 09:48:24 crc kubenswrapper[4970]: I0930 09:48:24.544879 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wdlzl_adc4e528-ad76-4673-925a-f4f932e1ac51/kube-multus/0.log" Sep 30 09:48:24 crc kubenswrapper[4970]: I0930 09:48:24.544962 4970 generic.go:334] "Generic (PLEG): container finished" podID="adc4e528-ad76-4673-925a-f4f932e1ac51" containerID="74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824" exitCode=1 Sep 30 09:48:24 crc kubenswrapper[4970]: I0930 09:48:24.545068 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wdlzl" event={"ID":"adc4e528-ad76-4673-925a-f4f932e1ac51","Type":"ContainerDied","Data":"74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824"} Sep 30 09:48:24 crc kubenswrapper[4970]: I0930 09:48:24.545161 4970 scope.go:117] "RemoveContainer" containerID="6f3d34a4e4f52107faa3020df2e7587153e83060a8536e42813157fefd2f8430" Sep 30 09:48:24 crc kubenswrapper[4970]: I0930 09:48:24.545668 4970 scope.go:117] "RemoveContainer" containerID="74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824" Sep 30 09:48:24 crc kubenswrapper[4970]: E0930 09:48:24.545878 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-wdlzl_openshift-multus(adc4e528-ad76-4673-925a-f4f932e1ac51)\"" pod="openshift-multus/multus-wdlzl" podUID="adc4e528-ad76-4673-925a-f4f932e1ac51" Sep 30 09:48:24 crc kubenswrapper[4970]: I0930 09:48:24.668109 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:24 crc kubenswrapper[4970]: I0930 09:48:24.668158 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:24 crc kubenswrapper[4970]: I0930 09:48:24.668120 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:24 crc kubenswrapper[4970]: E0930 09:48:24.668315 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:24 crc kubenswrapper[4970]: E0930 09:48:24.668400 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:24 crc kubenswrapper[4970]: E0930 09:48:24.668931 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:24 crc kubenswrapper[4970]: I0930 09:48:24.669229 4970 scope.go:117] "RemoveContainer" containerID="595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99" Sep 30 09:48:25 crc kubenswrapper[4970]: I0930 09:48:25.550026 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wdlzl_adc4e528-ad76-4673-925a-f4f932e1ac51/kube-multus/1.log" Sep 30 09:48:25 crc kubenswrapper[4970]: I0930 09:48:25.551807 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/3.log" Sep 30 09:48:25 crc kubenswrapper[4970]: I0930 09:48:25.553835 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerStarted","Data":"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813"} Sep 30 09:48:25 crc kubenswrapper[4970]: I0930 09:48:25.554309 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:48:25 crc kubenswrapper[4970]: I0930 09:48:25.588168 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podStartSLOduration=97.588150055 podStartE2EDuration="1m37.588150055s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:25.587263272 +0000 UTC m=+118.659114206" watchObservedRunningTime="2025-09-30 09:48:25.588150055 +0000 UTC m=+118.660000989" Sep 30 09:48:25 crc kubenswrapper[4970]: I0930 09:48:25.667914 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:25 crc kubenswrapper[4970]: E0930 09:48:25.668306 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:25 crc kubenswrapper[4970]: I0930 09:48:25.866730 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sgksk"] Sep 30 09:48:26 crc kubenswrapper[4970]: I0930 09:48:26.556845 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:26 crc kubenswrapper[4970]: E0930 09:48:26.557050 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:26 crc kubenswrapper[4970]: I0930 09:48:26.668201 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:26 crc kubenswrapper[4970]: E0930 09:48:26.668350 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:26 crc kubenswrapper[4970]: I0930 09:48:26.668420 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:26 crc kubenswrapper[4970]: I0930 09:48:26.668453 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:26 crc kubenswrapper[4970]: E0930 09:48:26.668899 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:26 crc kubenswrapper[4970]: E0930 09:48:26.668954 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:27 crc kubenswrapper[4970]: E0930 09:48:27.702357 4970 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Sep 30 09:48:27 crc kubenswrapper[4970]: E0930 09:48:27.780785 4970 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Sep 30 09:48:28 crc kubenswrapper[4970]: I0930 09:48:28.667556 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:28 crc kubenswrapper[4970]: I0930 09:48:28.667564 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:28 crc kubenswrapper[4970]: E0930 09:48:28.668026 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:28 crc kubenswrapper[4970]: I0930 09:48:28.667597 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:28 crc kubenswrapper[4970]: E0930 09:48:28.668114 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:28 crc kubenswrapper[4970]: I0930 09:48:28.667566 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:28 crc kubenswrapper[4970]: E0930 09:48:28.667931 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:28 crc kubenswrapper[4970]: E0930 09:48:28.668157 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:30 crc kubenswrapper[4970]: I0930 09:48:30.667866 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:30 crc kubenswrapper[4970]: I0930 09:48:30.667895 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:30 crc kubenswrapper[4970]: I0930 09:48:30.667866 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:30 crc kubenswrapper[4970]: E0930 09:48:30.668079 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:30 crc kubenswrapper[4970]: E0930 09:48:30.668085 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:30 crc kubenswrapper[4970]: E0930 09:48:30.668259 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:30 crc kubenswrapper[4970]: I0930 09:48:30.669627 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:30 crc kubenswrapper[4970]: E0930 09:48:30.669952 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:32 crc kubenswrapper[4970]: I0930 09:48:32.667442 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:32 crc kubenswrapper[4970]: I0930 09:48:32.667455 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:32 crc kubenswrapper[4970]: E0930 09:48:32.667591 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:32 crc kubenswrapper[4970]: I0930 09:48:32.667474 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:32 crc kubenswrapper[4970]: I0930 09:48:32.667476 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:32 crc kubenswrapper[4970]: E0930 09:48:32.667685 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:32 crc kubenswrapper[4970]: E0930 09:48:32.667769 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:32 crc kubenswrapper[4970]: E0930 09:48:32.667843 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:32 crc kubenswrapper[4970]: E0930 09:48:32.781838 4970 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 30 09:48:33 crc kubenswrapper[4970]: I0930 09:48:33.622369 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:48:34 crc kubenswrapper[4970]: I0930 09:48:34.668075 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:34 crc kubenswrapper[4970]: E0930 09:48:34.668273 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:34 crc kubenswrapper[4970]: I0930 09:48:34.668541 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:34 crc kubenswrapper[4970]: E0930 09:48:34.668629 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:34 crc kubenswrapper[4970]: I0930 09:48:34.668813 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:34 crc kubenswrapper[4970]: E0930 09:48:34.668892 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:34 crc kubenswrapper[4970]: I0930 09:48:34.669112 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:34 crc kubenswrapper[4970]: E0930 09:48:34.669217 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:35 crc kubenswrapper[4970]: I0930 09:48:35.668819 4970 scope.go:117] "RemoveContainer" containerID="74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824" Sep 30 09:48:36 crc kubenswrapper[4970]: I0930 09:48:36.602407 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wdlzl_adc4e528-ad76-4673-925a-f4f932e1ac51/kube-multus/1.log" Sep 30 09:48:36 crc kubenswrapper[4970]: I0930 09:48:36.602842 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wdlzl" event={"ID":"adc4e528-ad76-4673-925a-f4f932e1ac51","Type":"ContainerStarted","Data":"f9a542523ea7a51e1787f3a5415c9f5e402b4853c40b253cc053f7454fa09492"} Sep 30 09:48:36 crc kubenswrapper[4970]: I0930 09:48:36.668410 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:36 crc kubenswrapper[4970]: I0930 09:48:36.668439 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:36 crc kubenswrapper[4970]: I0930 09:48:36.668498 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:36 crc kubenswrapper[4970]: E0930 09:48:36.668578 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:36 crc kubenswrapper[4970]: E0930 09:48:36.668747 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:36 crc kubenswrapper[4970]: I0930 09:48:36.668807 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:36 crc kubenswrapper[4970]: E0930 09:48:36.668871 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:36 crc kubenswrapper[4970]: E0930 09:48:36.668935 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:37 crc kubenswrapper[4970]: E0930 09:48:37.782544 4970 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Sep 30 09:48:38 crc kubenswrapper[4970]: I0930 09:48:38.668352 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:38 crc kubenswrapper[4970]: I0930 09:48:38.668503 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:38 crc kubenswrapper[4970]: E0930 09:48:38.668571 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:38 crc kubenswrapper[4970]: I0930 09:48:38.668356 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:38 crc kubenswrapper[4970]: I0930 09:48:38.668391 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:38 crc kubenswrapper[4970]: E0930 09:48:38.668830 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:38 crc kubenswrapper[4970]: E0930 09:48:38.668901 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:38 crc kubenswrapper[4970]: E0930 09:48:38.669017 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:40 crc kubenswrapper[4970]: I0930 09:48:40.667764 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:40 crc kubenswrapper[4970]: I0930 09:48:40.667809 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:40 crc kubenswrapper[4970]: I0930 09:48:40.667882 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:40 crc kubenswrapper[4970]: E0930 09:48:40.667939 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:40 crc kubenswrapper[4970]: I0930 09:48:40.667966 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:40 crc kubenswrapper[4970]: E0930 09:48:40.668135 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:40 crc kubenswrapper[4970]: E0930 09:48:40.668319 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:40 crc kubenswrapper[4970]: E0930 09:48:40.668469 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:42 crc kubenswrapper[4970]: I0930 09:48:42.668405 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:42 crc kubenswrapper[4970]: I0930 09:48:42.668529 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:42 crc kubenswrapper[4970]: E0930 09:48:42.668570 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Sep 30 09:48:42 crc kubenswrapper[4970]: E0930 09:48:42.668765 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-sgksk" podUID="8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c" Sep 30 09:48:42 crc kubenswrapper[4970]: I0930 09:48:42.668826 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:42 crc kubenswrapper[4970]: I0930 09:48:42.668865 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:42 crc kubenswrapper[4970]: E0930 09:48:42.668932 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Sep 30 09:48:42 crc kubenswrapper[4970]: E0930 09:48:42.669017 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Sep 30 09:48:44 crc kubenswrapper[4970]: I0930 09:48:44.667753 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:44 crc kubenswrapper[4970]: I0930 09:48:44.667860 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:48:44 crc kubenswrapper[4970]: I0930 09:48:44.668040 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:44 crc kubenswrapper[4970]: I0930 09:48:44.668109 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:44 crc kubenswrapper[4970]: I0930 09:48:44.673149 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Sep 30 09:48:44 crc kubenswrapper[4970]: I0930 09:48:44.673250 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Sep 30 09:48:44 crc kubenswrapper[4970]: I0930 09:48:44.673306 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Sep 30 09:48:44 crc kubenswrapper[4970]: I0930 09:48:44.673951 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Sep 30 09:48:44 crc kubenswrapper[4970]: I0930 09:48:44.675363 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Sep 30 09:48:44 crc kubenswrapper[4970]: I0930 09:48:44.676365 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.005214 4970 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.061720 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k4ph9"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.064213 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6r7rb"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.064392 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.066037 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.069432 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tqp6b"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.070317 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.072519 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.073413 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.076168 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-f7zdf"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.077035 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-f7zdf" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.077703 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.079640 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.081254 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.081781 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.082130 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.082765 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.083364 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.083607 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.084156 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.084536 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.085735 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.086091 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gtdq5"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.086816 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.087787 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.096358 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.096716 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.098812 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.099571 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nmtqv"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.100186 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.100304 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.100195 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-b78lb"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.100977 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.101419 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-c8bs2"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.101876 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.102250 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.117124 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.117156 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.121038 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.125634 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.141533 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.142158 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.142548 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.142581 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.142808 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.142853 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.142975 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.143018 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.143077 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.143149 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.143271 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.143727 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.143849 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.144029 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.144217 4970 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.143151 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.145114 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.145293 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.145589 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.146377 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.146611 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.146784 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.146877 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.146955 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.147125 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.147293 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.147421 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.147545 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.147875 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.148324 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.150751 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.152184 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.159135 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.159388 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.159500 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.159595 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.159719 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.159854 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.159947 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.160116 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.160382 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.160475 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.160682 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.161098 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.161307 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.161453 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.161495 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.161625 4970 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.161815 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162016 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162192 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162236 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162322 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162340 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.161513 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162405 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162491 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162577 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162639 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162655 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162726 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162761 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162806 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162839 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162497 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.162727 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.163028 4970 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.163059 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.163182 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.163276 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.159903 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.161153 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.163534 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.163678 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.163687 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.164461 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.164943 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.168329 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.168357 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.170187 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.170671 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.171477 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vjppp"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.172364 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.173632 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.173761 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qmlst"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.198667 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.199239 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.199644 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qmlst" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.200927 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.203735 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.203837 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.203745 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.217387 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.217793 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.217868 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d41d7513-cd63-4320-907a-51d6e48fa9e0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.217919 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-audit-policies\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.217948 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f2267d30-75c6-4002-ae56-b623dc6d7e42-images\") pod \"machine-api-operator-5694c8668f-b78lb\" (UID: \"f2267d30-75c6-4002-ae56-b623dc6d7e42\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.217975 
4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-config\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218021 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zqc9\" (UniqueName: \"kubernetes.io/projected/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-kube-api-access-7zqc9\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218050 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c9b08a-94e3-4606-8fbf-188005bbd87d-serving-cert\") pod \"console-operator-58897d9998-tqp6b\" (UID: \"02c9b08a-94e3-4606-8fbf-188005bbd87d\") " pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218072 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08f9bfd0-2121-4159-b0aa-41f5cc539aae-serving-cert\") pod \"openshift-config-operator-7777fb866f-cm6p7\" (UID: \"08f9bfd0-2121-4159-b0aa-41f5cc539aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218095 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d58246f9-2537-42e2-af7b-8db153b987aa-etcd-client\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218117 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58246f9-2537-42e2-af7b-8db153b987aa-serving-cert\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218138 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d41d7513-cd63-4320-907a-51d6e48fa9e0-audit-policies\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218161 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p2r4\" (UniqueName: \"kubernetes.io/projected/ea5f4522-63a6-4fdc-add5-e75832a54c98-kube-api-access-2p2r4\") pod \"openshift-apiserver-operator-796bbdcf4f-5x9pd\" (UID: \"ea5f4522-63a6-4fdc-add5-e75832a54c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218185 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/02c9b08a-94e3-4606-8fbf-188005bbd87d-trusted-ca\") pod \"console-operator-58897d9998-tqp6b\" (UID: \"02c9b08a-94e3-4606-8fbf-188005bbd87d\") " pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218209 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-config\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218232 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8c87a-d4b3-443b-af72-412d3eb74754-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218252 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2267d30-75c6-4002-ae56-b623dc6d7e42-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-b78lb\" (UID: \"f2267d30-75c6-4002-ae56-b623dc6d7e42\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218270 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f677e24a-6a65-4cc4-8653-6ef411944dfd-config\") pod \"machine-approver-56656f9798-49sh9\" (UID: \"f677e24a-6a65-4cc4-8653-6ef411944dfd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218288 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-etcd-serving-ca\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218315 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218337 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f677e24a-6a65-4cc4-8653-6ef411944dfd-auth-proxy-config\") pod \"machine-approver-56656f9798-49sh9\" (UID: \"f677e24a-6a65-4cc4-8653-6ef411944dfd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218357 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4eac6509-7889-4976-bcc4-bf65486c098f-console-serving-cert\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218375 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea5f4522-63a6-4fdc-add5-e75832a54c98-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5x9pd\" (UID: \"ea5f4522-63a6-4fdc-add5-e75832a54c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218402 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218428 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218453 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea5f4522-63a6-4fdc-add5-e75832a54c98-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5x9pd\" (UID: \"ea5f4522-63a6-4fdc-add5-e75832a54c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218473 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-image-import-ca\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218493 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d41d7513-cd63-4320-907a-51d6e48fa9e0-serving-cert\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218529 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd9dda51-d5d7-46e1-886d-865955b5bd39-serving-cert\") pod \"route-controller-manager-6576b87f9c-zwd8r\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218550 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67zz8\" (UniqueName: 
\"kubernetes.io/projected/e2a8c87a-d4b3-443b-af72-412d3eb74754-kube-api-access-67zz8\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218572 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218595 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddxqx\" (UniqueName: \"kubernetes.io/projected/6f358f9a-4142-4a01-b23b-2c086a9a78fc-kube-api-access-ddxqx\") pod \"cluster-samples-operator-665b6dd947-6lmxk\" (UID: \"6f358f9a-4142-4a01-b23b-2c086a9a78fc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218601 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218618 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8c87a-d4b3-443b-af72-412d3eb74754-service-ca-bundle\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218646 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218670 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f358f9a-4142-4a01-b23b-2c086a9a78fc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6lmxk\" (UID: \"6f358f9a-4142-4a01-b23b-2c086a9a78fc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218692 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d58246f9-2537-42e2-af7b-8db153b987aa-audit-dir\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218723 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: 
\"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218748 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-serving-cert\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218770 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-audit-dir\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218794 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j777q\" (UniqueName: \"kubernetes.io/projected/d58246f9-2537-42e2-af7b-8db153b987aa-kube-api-access-j777q\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218840 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a8c87a-d4b3-443b-af72-412d3eb74754-config\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218867 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218893 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d41d7513-cd63-4320-907a-51d6e48fa9e0-encryption-config\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218918 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-259wb\" (UniqueName: \"kubernetes.io/projected/d41d7513-cd63-4320-907a-51d6e48fa9e0-kube-api-access-259wb\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218941 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w9nz\" (UniqueName: \"kubernetes.io/projected/f2267d30-75c6-4002-ae56-b623dc6d7e42-kube-api-access-2w9nz\") pod \"machine-api-operator-5694c8668f-b78lb\" (UID: \"f2267d30-75c6-4002-ae56-b623dc6d7e42\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.218965 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4eac6509-7889-4976-bcc4-bf65486c098f-console-oauth-config\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219042 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9dda51-d5d7-46e1-886d-865955b5bd39-config\") pod \"route-controller-manager-6576b87f9c-zwd8r\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219079 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-client-ca\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219102 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a8c87a-d4b3-443b-af72-412d3eb74754-serving-cert\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219126 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f677e24a-6a65-4cc4-8653-6ef411944dfd-machine-approver-tls\") pod \"machine-approver-56656f9798-49sh9\" (UID: \"f677e24a-6a65-4cc4-8653-6ef411944dfd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219152 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxdtk\" (UniqueName: \"kubernetes.io/projected/f677e24a-6a65-4cc4-8653-6ef411944dfd-kube-api-access-gxdtk\") pod \"machine-approver-56656f9798-49sh9\" (UID: \"f677e24a-6a65-4cc4-8653-6ef411944dfd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219177 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219203 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: 
\"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219228 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-trusted-ca-bundle\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219251 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219274 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jh8bg\" (UniqueName: \"kubernetes.io/projected/cd9dda51-d5d7-46e1-886d-865955b5bd39-kube-api-access-jh8bg\") pod \"route-controller-manager-6576b87f9c-zwd8r\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219299 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zbzf\" (UniqueName: \"kubernetes.io/projected/02c9b08a-94e3-4606-8fbf-188005bbd87d-kube-api-access-6zbzf\") pod \"console-operator-58897d9998-tqp6b\" (UID: \"02c9b08a-94e3-4606-8fbf-188005bbd87d\") " pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219322 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219344 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d58246f9-2537-42e2-af7b-8db153b987aa-encryption-config\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219370 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d41d7513-cd63-4320-907a-51d6e48fa9e0-etcd-client\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219397 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c9b08a-94e3-4606-8fbf-188005bbd87d-config\") pod \"console-operator-58897d9998-tqp6b\" (UID: \"02c9b08a-94e3-4606-8fbf-188005bbd87d\") " 
pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219420 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbdf\" (UniqueName: \"kubernetes.io/projected/200e46f5-be36-4a88-85d0-fb279eba20c5-kube-api-access-sbbdf\") pod \"downloads-7954f5f757-f7zdf\" (UID: \"200e46f5-be36-4a88-85d0-fb279eba20c5\") " pod="openshift-console/downloads-7954f5f757-f7zdf" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219441 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/08f9bfd0-2121-4159-b0aa-41f5cc539aae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cm6p7\" (UID: \"08f9bfd0-2121-4159-b0aa-41f5cc539aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219482 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-audit\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219540 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d58246f9-2537-42e2-af7b-8db153b987aa-node-pullsecrets\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219588 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219675 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd9dda51-d5d7-46e1-886d-865955b5bd39-client-ca\") pod \"route-controller-manager-6576b87f9c-zwd8r\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219704 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78m26\" (UniqueName: \"kubernetes.io/projected/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-kube-api-access-78m26\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219721 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2267d30-75c6-4002-ae56-b623dc6d7e42-config\") pod \"machine-api-operator-5694c8668f-b78lb\" (UID: \"f2267d30-75c6-4002-ae56-b623dc6d7e42\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219733 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219940 4970 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.219741 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l24c\" (UniqueName: \"kubernetes.io/projected/08f9bfd0-2121-4159-b0aa-41f5cc539aae-kube-api-access-6l24c\") pod \"openshift-config-operator-7777fb866f-cm6p7\" (UID: \"08f9bfd0-2121-4159-b0aa-41f5cc539aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.220086 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d41d7513-cd63-4320-907a-51d6e48fa9e0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.220115 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.220131 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlvz5\" (UniqueName: \"kubernetes.io/projected/4eac6509-7889-4976-bcc4-bf65486c098f-kube-api-access-vlvz5\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.220149 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.220164 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-console-config\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.220178 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-service-ca\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.220191 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-oauth-serving-cert\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" 
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.220206 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d41d7513-cd63-4320-907a-51d6e48fa9e0-audit-dir\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.220338 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.220437 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.221373 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-f4xmt"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.222183 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.222897 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.223052 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.223350 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.223586 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.224387 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.224850 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.227604 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.229679 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9mqqn"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.230435 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.230793 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.231009 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mqqn" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.231869 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.232246 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.234626 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-27726"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.235379 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-27726" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.235643 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.236069 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.236620 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.237269 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.237413 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.237764 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.239476 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-mnwkr"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.239857 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.241488 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.241756 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mnwkr" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.241812 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fr5b7"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.245308 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tp67v"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.245715 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fr5b7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.246852 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.246870 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.247414 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.247471 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.247776 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.252553 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cs6xx"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.252714 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.253475 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.253932 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.254565 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.254745 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.255046 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.255067 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.255092 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.255126 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.255132 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.255865 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6r7rb"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.257013 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-f7zdf"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.257687 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.258665 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tqp6b"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.259736 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.260742 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.265256 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gtdq5"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.270771 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k4ph9"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.271590 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-svmb7"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.274151 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-svmb7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.277709 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.279255 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-b78lb"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.281011 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qmlst"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.284201 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c8bs2"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.286039 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fr5b7"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.287691 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.290320 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vjppp"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.291756 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.292852 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-f4xmt"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.293923 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.295280 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.296851 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nmtqv"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.297228 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.298397 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9mqqn"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.299703 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.301683 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.307194 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-svmb7"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.307350 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Sep 30 09:48:47 crc 
kubenswrapper[4970]: I0930 09:48:47.308139 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.310901 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.313829 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.315312 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.317175 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.317452 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.319868 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cs6xx"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.320899 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67zz8\" (UniqueName: \"kubernetes.io/projected/e2a8c87a-d4b3-443b-af72-412d3eb74754-kube-api-access-67zz8\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.320971 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8c87a-d4b3-443b-af72-412d3eb74754-service-ca-bundle\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321022 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321052 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntwdk\" (UniqueName: \"kubernetes.io/projected/2f8b3127-d745-44c3-9170-9ebd73c5f2ea-kube-api-access-ntwdk\") pod \"catalog-operator-68c6474976-vd6gv\" (UID: \"2f8b3127-d745-44c3-9170-9ebd73c5f2ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321074 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321093 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-default-certificate\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321113 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j777q\" (UniqueName: \"kubernetes.io/projected/d58246f9-2537-42e2-af7b-8db153b987aa-kube-api-access-j777q\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321139 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73c082f9-de6d-4efb-970d-8497d39ab890-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xdf86\" (UID: \"73c082f9-de6d-4efb-970d-8497d39ab890\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321158 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d8a711c-0969-4002-8b2f-84acf31ea060-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g2wvg\" (UID: \"8d8a711c-0969-4002-8b2f-84acf31ea060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321177 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9jz7\" (UniqueName: \"kubernetes.io/projected/e0b59dab-c4d7-4baa-9811-f29d7b19be0b-kube-api-access-d9jz7\") pod \"control-plane-machine-set-operator-78cbb6b69f-65v4s\" (UID: \"e0b59dab-c4d7-4baa-9811-f29d7b19be0b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321194 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-config\") pod \"service-ca-operator-777779d784-tp67v\" (UID: \"e80c8be2-53ae-4ddc-ab06-59133d53f4eb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321216 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4eac6509-7889-4976-bcc4-bf65486c098f-console-oauth-config\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321234 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9dda51-d5d7-46e1-886d-865955b5bd39-config\") pod \"route-controller-manager-6576b87f9c-zwd8r\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 
09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321254 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f677e24a-6a65-4cc4-8653-6ef411944dfd-machine-approver-tls\") pod \"machine-approver-56656f9798-49sh9\" (UID: \"f677e24a-6a65-4cc4-8653-6ef411944dfd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321271 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxdtk\" (UniqueName: \"kubernetes.io/projected/f677e24a-6a65-4cc4-8653-6ef411944dfd-kube-api-access-gxdtk\") pod \"machine-approver-56656f9798-49sh9\" (UID: \"f677e24a-6a65-4cc4-8653-6ef411944dfd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321288 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrfh5\" (UniqueName: \"kubernetes.io/projected/14645e18-5ae5-40f7-b52f-591a49032bc0-kube-api-access-jrfh5\") pod \"marketplace-operator-79b997595-27726\" (UID: \"14645e18-5ae5-40f7-b52f-591a49032bc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-27726" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321321 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-serving-cert\") pod \"service-ca-operator-777779d784-tp67v\" (UID: \"e80c8be2-53ae-4ddc-ab06-59133d53f4eb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321342 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321362 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-trusted-ca-bundle\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321380 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lzl7\" (UniqueName: \"kubernetes.io/projected/a6c612c8-92ec-4052-86ee-a1f340e70b04-kube-api-access-7lzl7\") pod \"ingress-operator-5b745b69d9-mml8f\" (UID: \"a6c612c8-92ec-4052-86ee-a1f340e70b04\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321398 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c082f9-de6d-4efb-970d-8497d39ab890-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xdf86\" (UID: \"73c082f9-de6d-4efb-970d-8497d39ab890\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 
09:48:47.321417 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zbzf\" (UniqueName: \"kubernetes.io/projected/02c9b08a-94e3-4606-8fbf-188005bbd87d-kube-api-access-6zbzf\") pod \"console-operator-58897d9998-tqp6b\" (UID: \"02c9b08a-94e3-4606-8fbf-188005bbd87d\") " pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321437 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321457 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c9b08a-94e3-4606-8fbf-188005bbd87d-config\") pod \"console-operator-58897d9998-tqp6b\" (UID: \"02c9b08a-94e3-4606-8fbf-188005bbd87d\") " pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321475 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbdf\" (UniqueName: \"kubernetes.io/projected/200e46f5-be36-4a88-85d0-fb279eba20c5-kube-api-access-sbbdf\") pod \"downloads-7954f5f757-f7zdf\" (UID: \"200e46f5-be36-4a88-85d0-fb279eba20c5\") " pod="openshift-console/downloads-7954f5f757-f7zdf" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321500 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-audit\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321518 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d8a711c-0969-4002-8b2f-84acf31ea060-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g2wvg\" (UID: \"8d8a711c-0969-4002-8b2f-84acf31ea060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321542 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78m26\" (UniqueName: \"kubernetes.io/projected/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-kube-api-access-78m26\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321558 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2267d30-75c6-4002-ae56-b623dc6d7e42-config\") pod \"machine-api-operator-5694c8668f-b78lb\" (UID: \"f2267d30-75c6-4002-ae56-b623dc6d7e42\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321575 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l24c\" (UniqueName: \"kubernetes.io/projected/08f9bfd0-2121-4159-b0aa-41f5cc539aae-kube-api-access-6l24c\") pod 
\"openshift-config-operator-7777fb866f-cm6p7\" (UID: \"08f9bfd0-2121-4159-b0aa-41f5cc539aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321592 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d41d7513-cd63-4320-907a-51d6e48fa9e0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321608 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6c612c8-92ec-4052-86ee-a1f340e70b04-trusted-ca\") pod \"ingress-operator-5b745b69d9-mml8f\" (UID: \"a6c612c8-92ec-4052-86ee-a1f340e70b04\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321626 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-stats-auth\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321644 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2f8b3127-d745-44c3-9170-9ebd73c5f2ea-srv-cert\") pod \"catalog-operator-68c6474976-vd6gv\" (UID: \"2f8b3127-d745-44c3-9170-9ebd73c5f2ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321661 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321679 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321695 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-console-config\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321714 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-service-ca\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321732 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-oauth-serving-cert\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321750 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8779368-6bce-4ab6-b3e0-3566175db496-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rngv8\" (UID: \"e8779368-6bce-4ab6-b3e0-3566175db496\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321769 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f2267d30-75c6-4002-ae56-b623dc6d7e42-images\") pod \"machine-api-operator-5694c8668f-b78lb\" (UID: \"f2267d30-75c6-4002-ae56-b623dc6d7e42\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321787 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-config\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321806 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684028c5-aab7-4020-8ccd-b1f7b575f59d-config\") pod \"kube-controller-manager-operator-78b949d7b-mdq8d\" (UID: \"684028c5-aab7-4020-8ccd-b1f7b575f59d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321823 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c9b08a-94e3-4606-8fbf-188005bbd87d-serving-cert\") pod \"console-operator-58897d9998-tqp6b\" (UID: \"02c9b08a-94e3-4606-8fbf-188005bbd87d\") " pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321843 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p2r4\" (UniqueName: \"kubernetes.io/projected/ea5f4522-63a6-4fdc-add5-e75832a54c98-kube-api-access-2p2r4\") pod \"openshift-apiserver-operator-796bbdcf4f-5x9pd\" (UID: \"ea5f4522-63a6-4fdc-add5-e75832a54c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321861 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14645e18-5ae5-40f7-b52f-591a49032bc0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-27726\" (UID: \"14645e18-5ae5-40f7-b52f-591a49032bc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-27726" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321878 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/684028c5-aab7-4020-8ccd-b1f7b575f59d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mdq8d\" (UID: \"684028c5-aab7-4020-8ccd-b1f7b575f59d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321894 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c082f9-de6d-4efb-970d-8497d39ab890-config\") pod \"kube-apiserver-operator-766d6c64bb-xdf86\" (UID: \"73c082f9-de6d-4efb-970d-8497d39ab890\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321911 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8779368-6bce-4ab6-b3e0-3566175db496-proxy-tls\") pod \"machine-config-controller-84d6567774-rngv8\" (UID: \"e8779368-6bce-4ab6-b3e0-3566175db496\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321929 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2267d30-75c6-4002-ae56-b623dc6d7e42-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-b78lb\" (UID: \"f2267d30-75c6-4002-ae56-b623dc6d7e42\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321947 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f677e24a-6a65-4cc4-8653-6ef411944dfd-config\") pod \"machine-approver-56656f9798-49sh9\" (UID: \"f677e24a-6a65-4cc4-8653-6ef411944dfd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321964 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hf9c\" (UniqueName: \"kubernetes.io/projected/3339365c-8f70-47e8-9cc4-51f20cf3068e-kube-api-access-9hf9c\") pod \"migrator-59844c95c7-fr5b7\" (UID: \"3339365c-8f70-47e8-9cc4-51f20cf3068e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fr5b7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.321996 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b12d3d55-4a75-467d-ab67-e1b30e673183-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6dc7k\" (UID: \"b12d3d55-4a75-467d-ab67-e1b30e673183\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322014 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6c612c8-92ec-4052-86ee-a1f340e70b04-metrics-tls\") pod \"ingress-operator-5b745b69d9-mml8f\" (UID: \"a6c612c8-92ec-4052-86ee-a1f340e70b04\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322030 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/14645e18-5ae5-40f7-b52f-591a49032bc0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-27726\" (UID: \"14645e18-5ae5-40f7-b52f-591a49032bc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-27726" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322049 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322066 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-image-import-ca\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322083 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d41d7513-cd63-4320-907a-51d6e48fa9e0-serving-cert\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322106 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgsb6\" (UniqueName: \"kubernetes.io/projected/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-kube-api-access-dgsb6\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322122 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-service-ca-bundle\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322137 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-metrics-certs\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322154 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322172 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddxqx\" (UniqueName: \"kubernetes.io/projected/6f358f9a-4142-4a01-b23b-2c086a9a78fc-kube-api-access-ddxqx\") pod 
\"cluster-samples-operator-665b6dd947-6lmxk\" (UID: \"6f358f9a-4142-4a01-b23b-2c086a9a78fc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322190 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd9dda51-d5d7-46e1-886d-865955b5bd39-serving-cert\") pod \"route-controller-manager-6576b87f9c-zwd8r\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322208 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6c612c8-92ec-4052-86ee-a1f340e70b04-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mml8f\" (UID: \"a6c612c8-92ec-4052-86ee-a1f340e70b04\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322225 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f358f9a-4142-4a01-b23b-2c086a9a78fc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6lmxk\" (UID: \"6f358f9a-4142-4a01-b23b-2c086a9a78fc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322243 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d58246f9-2537-42e2-af7b-8db153b987aa-audit-dir\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322267 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-serving-cert\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322284 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b12d3d55-4a75-467d-ab67-e1b30e673183-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6dc7k\" (UID: \"b12d3d55-4a75-467d-ab67-e1b30e673183\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322301 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-audit-dir\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322317 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c98d2b38-fbe9-4f9f-acce-e7e34dc123e6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ktnvq\" (UID: 
\"c98d2b38-fbe9-4f9f-acce-e7e34dc123e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322334 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a8c87a-d4b3-443b-af72-412d3eb74754-config\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322351 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322375 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpk4w\" (UniqueName: \"kubernetes.io/projected/e8779368-6bce-4ab6-b3e0-3566175db496-kube-api-access-zpk4w\") pod \"machine-config-controller-84d6567774-rngv8\" (UID: \"e8779368-6bce-4ab6-b3e0-3566175db496\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322398 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d41d7513-cd63-4320-907a-51d6e48fa9e0-encryption-config\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322420 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-259wb\" (UniqueName: \"kubernetes.io/projected/d41d7513-cd63-4320-907a-51d6e48fa9e0-kube-api-access-259wb\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322437 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w9nz\" (UniqueName: \"kubernetes.io/projected/f2267d30-75c6-4002-ae56-b623dc6d7e42-kube-api-access-2w9nz\") pod \"machine-api-operator-5694c8668f-b78lb\" (UID: \"f2267d30-75c6-4002-ae56-b623dc6d7e42\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322456 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-client-ca\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322479 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2f8b3127-d745-44c3-9170-9ebd73c5f2ea-profile-collector-cert\") pod \"catalog-operator-68c6474976-vd6gv\" (UID: \"2f8b3127-d745-44c3-9170-9ebd73c5f2ea\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322505 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a8c87a-d4b3-443b-af72-412d3eb74754-serving-cert\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322526 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322556 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322580 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jh8bg\" (UniqueName: \"kubernetes.io/projected/cd9dda51-d5d7-46e1-886d-865955b5bd39-kube-api-access-jh8bg\") pod \"route-controller-manager-6576b87f9c-zwd8r\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322605 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d58246f9-2537-42e2-af7b-8db153b987aa-encryption-config\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322627 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d41d7513-cd63-4320-907a-51d6e48fa9e0-etcd-client\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322647 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/08f9bfd0-2121-4159-b0aa-41f5cc539aae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cm6p7\" (UID: \"08f9bfd0-2121-4159-b0aa-41f5cc539aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322800 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-27726"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.322820 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.324756 4970 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.325718 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f3224bd1-c3c3-434c-9549-2b6e7a20f9a2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9mqqn\" (UID: \"f3224bd1-c3c3-434c-9549-2b6e7a20f9a2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mqqn" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.325754 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z72bz\" (UniqueName: \"kubernetes.io/projected/f3224bd1-c3c3-434c-9549-2b6e7a20f9a2-kube-api-access-z72bz\") pod \"multus-admission-controller-857f4d67dd-9mqqn\" (UID: \"f3224bd1-c3c3-434c-9549-2b6e7a20f9a2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mqqn" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.325778 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d58246f9-2537-42e2-af7b-8db153b987aa-node-pullsecrets\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.325798 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd9dda51-d5d7-46e1-886d-865955b5bd39-client-ca\") pod \"route-controller-manager-6576b87f9c-zwd8r\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.325826 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0b59dab-c4d7-4baa-9811-f29d7b19be0b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-65v4s\" (UID: \"e0b59dab-c4d7-4baa-9811-f29d7b19be0b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.325859 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlvz5\" (UniqueName: \"kubernetes.io/projected/4eac6509-7889-4976-bcc4-bf65486c098f-kube-api-access-vlvz5\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.325895 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b4tk\" (UniqueName: \"kubernetes.io/projected/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-kube-api-access-9b4tk\") pod \"service-ca-operator-777779d784-tp67v\" (UID: \"e80c8be2-53ae-4ddc-ab06-59133d53f4eb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.325928 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d41d7513-cd63-4320-907a-51d6e48fa9e0-audit-dir\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: 
\"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.325962 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d41d7513-cd63-4320-907a-51d6e48fa9e0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.325989 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpkds\" (UniqueName: \"kubernetes.io/projected/8d8a711c-0969-4002-8b2f-84acf31ea060-kube-api-access-tpkds\") pod \"cluster-image-registry-operator-dc59b4c8b-g2wvg\" (UID: \"8d8a711c-0969-4002-8b2f-84acf31ea060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326031 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-audit-policies\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326055 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zqc9\" (UniqueName: \"kubernetes.io/projected/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-kube-api-access-7zqc9\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326080 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08f9bfd0-2121-4159-b0aa-41f5cc539aae-serving-cert\") pod \"openshift-config-operator-7777fb866f-cm6p7\" (UID: \"08f9bfd0-2121-4159-b0aa-41f5cc539aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326111 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d58246f9-2537-42e2-af7b-8db153b987aa-etcd-client\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326136 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58246f9-2537-42e2-af7b-8db153b987aa-serving-cert\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326160 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d41d7513-cd63-4320-907a-51d6e48fa9e0-audit-policies\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326424 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/684028c5-aab7-4020-8ccd-b1f7b575f59d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mdq8d\" (UID: \"684028c5-aab7-4020-8ccd-b1f7b575f59d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326464 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02c9b08a-94e3-4606-8fbf-188005bbd87d-trusted-ca\") pod \"console-operator-58897d9998-tqp6b\" (UID: \"02c9b08a-94e3-4606-8fbf-188005bbd87d\") " pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326487 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-config\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326510 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8c87a-d4b3-443b-af72-412d3eb74754-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326526 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-etcd-serving-ca\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326548 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326574 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d8a711c-0969-4002-8b2f-84acf31ea060-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g2wvg\" (UID: \"8d8a711c-0969-4002-8b2f-84acf31ea060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326599 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f677e24a-6a65-4cc4-8653-6ef411944dfd-auth-proxy-config\") pod \"machine-approver-56656f9798-49sh9\" (UID: \"f677e24a-6a65-4cc4-8653-6ef411944dfd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326619 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c98d2b38-fbe9-4f9f-acce-e7e34dc123e6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ktnvq\" (UID: \"c98d2b38-fbe9-4f9f-acce-e7e34dc123e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326641 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4eac6509-7889-4976-bcc4-bf65486c098f-console-serving-cert\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326659 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea5f4522-63a6-4fdc-add5-e75832a54c98-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5x9pd\" (UID: \"ea5f4522-63a6-4fdc-add5-e75832a54c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326677 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c98d2b38-fbe9-4f9f-acce-e7e34dc123e6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ktnvq\" (UID: \"c98d2b38-fbe9-4f9f-acce-e7e34dc123e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326700 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp2jc\" (UniqueName: \"kubernetes.io/projected/b12d3d55-4a75-467d-ab67-e1b30e673183-kube-api-access-mp2jc\") pod \"kube-storage-version-migrator-operator-b67b599dd-6dc7k\" (UID: \"b12d3d55-4a75-467d-ab67-e1b30e673183\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326726 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.326747 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea5f4522-63a6-4fdc-add5-e75832a54c98-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5x9pd\" (UID: \"ea5f4522-63a6-4fdc-add5-e75832a54c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.327907 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-trusted-ca-bundle\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.328453 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" 
(UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-console-config\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.328577 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-k2nnd"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.329106 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d58246f9-2537-42e2-af7b-8db153b987aa-audit-dir\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.329284 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d41d7513-cd63-4320-907a-51d6e48fa9e0-audit-dir\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.329874 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.330143 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k2nnd" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.331055 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d41d7513-cd63-4320-907a-51d6e48fa9e0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.330250 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/d58246f9-2537-42e2-af7b-8db153b987aa-node-pullsecrets\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.330308 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9dda51-d5d7-46e1-886d-865955b5bd39-config\") pod \"route-controller-manager-6576b87f9c-zwd8r\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.329878 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f677e24a-6a65-4cc4-8653-6ef411944dfd-config\") pod \"machine-approver-56656f9798-49sh9\" (UID: \"f677e24a-6a65-4cc4-8653-6ef411944dfd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.330773 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-audit-policies\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.331204 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-client-ca\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.330211 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d41d7513-cd63-4320-907a-51d6e48fa9e0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.332187 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-service-ca\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.332305 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8c87a-d4b3-443b-af72-412d3eb74754-service-ca-bundle\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.333052 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f2267d30-75c6-4002-ae56-b623dc6d7e42-images\") pod \"machine-api-operator-5694c8668f-b78lb\" (UID: \"f2267d30-75c6-4002-ae56-b623dc6d7e42\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.333185 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02c9b08a-94e3-4606-8fbf-188005bbd87d-config\") pod \"console-operator-58897d9998-tqp6b\" (UID: \"02c9b08a-94e3-4606-8fbf-188005bbd87d\") " pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.333539 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-oauth-serving-cert\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.333837 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-image-import-ca\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.333858 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-audit\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " 
pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.333846 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/08f9bfd0-2121-4159-b0aa-41f5cc539aae-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cm6p7\" (UID: \"08f9bfd0-2121-4159-b0aa-41f5cc539aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.334222 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.334319 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.334487 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d58246f9-2537-42e2-af7b-8db153b987aa-encryption-config\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.334841 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2267d30-75c6-4002-ae56-b623dc6d7e42-config\") pod \"machine-api-operator-5694c8668f-b78lb\" (UID: \"f2267d30-75c6-4002-ae56-b623dc6d7e42\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.335488 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-audit-dir\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.335971 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e2a8c87a-d4b3-443b-af72-412d3eb74754-serving-cert\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.335504 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd9dda51-d5d7-46e1-886d-865955b5bd39-client-ca\") pod \"route-controller-manager-6576b87f9c-zwd8r\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.336195 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-config\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.336366 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d41d7513-cd63-4320-907a-51d6e48fa9e0-serving-cert\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.336376 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tnccw"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.336587 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d41d7513-cd63-4320-907a-51d6e48fa9e0-etcd-client\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.336631 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6f358f9a-4142-4a01-b23b-2c086a9a78fc-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6lmxk\" (UID: \"6f358f9a-4142-4a01-b23b-2c086a9a78fc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.336895 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ea5f4522-63a6-4fdc-add5-e75832a54c98-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-5x9pd\" (UID: \"ea5f4522-63a6-4fdc-add5-e75832a54c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.337953 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2a8c87a-d4b3-443b-af72-412d3eb74754-config\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.338315 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tp67v"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.338356 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k2nnd"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.338472 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tnccw" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.338610 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd9dda51-d5d7-46e1-886d-865955b5bd39-serving-cert\") pod \"route-controller-manager-6576b87f9c-zwd8r\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.338888 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.339239 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-config\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.339548 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ea5f4522-63a6-4fdc-add5-e75832a54c98-config\") pod \"openshift-apiserver-operator-796bbdcf4f-5x9pd\" (UID: \"ea5f4522-63a6-4fdc-add5-e75832a54c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.339862 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08f9bfd0-2121-4159-b0aa-41f5cc539aae-serving-cert\") pod \"openshift-config-operator-7777fb866f-cm6p7\" (UID: \"08f9bfd0-2121-4159-b0aa-41f5cc539aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.340046 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2a8c87a-d4b3-443b-af72-412d3eb74754-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.340071 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-etcd-serving-ca\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.340439 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02c9b08a-94e3-4606-8fbf-188005bbd87d-serving-cert\") pod \"console-operator-58897d9998-tqp6b\" (UID: \"02c9b08a-94e3-4606-8fbf-188005bbd87d\") " pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.340591 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/d58246f9-2537-42e2-af7b-8db153b987aa-trusted-ca-bundle\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.340610 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-serving-cert\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.340683 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f677e24a-6a65-4cc4-8653-6ef411944dfd-machine-approver-tls\") pod \"machine-approver-56656f9798-49sh9\" (UID: \"f677e24a-6a65-4cc4-8653-6ef411944dfd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.340725 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.341161 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/f2267d30-75c6-4002-ae56-b623dc6d7e42-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-b78lb\" (UID: \"f2267d30-75c6-4002-ae56-b623dc6d7e42\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.341580 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.341919 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d58246f9-2537-42e2-af7b-8db153b987aa-etcd-client\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.342459 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.342511 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2sdhl"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.342534 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 
09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.342731 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.342966 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.343050 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.343335 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.343591 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d58246f9-2537-42e2-af7b-8db153b987aa-serving-cert\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.343875 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.344137 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2sdhl"] Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.347366 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.352342 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f677e24a-6a65-4cc4-8653-6ef411944dfd-auth-proxy-config\") pod \"machine-approver-56656f9798-49sh9\" (UID: \"f677e24a-6a65-4cc4-8653-6ef411944dfd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.367831 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.387329 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.401680 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.402240 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d41d7513-cd63-4320-907a-51d6e48fa9e0-encryption-config\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.402258 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.402325 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d41d7513-cd63-4320-907a-51d6e48fa9e0-audit-policies\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.404213 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02c9b08a-94e3-4606-8fbf-188005bbd87d-trusted-ca\") pod \"console-operator-58897d9998-tqp6b\" (UID: \"02c9b08a-94e3-4606-8fbf-188005bbd87d\") " pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.404921 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/4eac6509-7889-4976-bcc4-bf65486c098f-console-oauth-config\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.406882 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.409298 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4eac6509-7889-4976-bcc4-bf65486c098f-console-serving-cert\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429155 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429375 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lzl7\" (UniqueName: \"kubernetes.io/projected/a6c612c8-92ec-4052-86ee-a1f340e70b04-kube-api-access-7lzl7\") pod \"ingress-operator-5b745b69d9-mml8f\" (UID: \"a6c612c8-92ec-4052-86ee-a1f340e70b04\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429428 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c082f9-de6d-4efb-970d-8497d39ab890-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xdf86\" (UID: \"73c082f9-de6d-4efb-970d-8497d39ab890\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429484 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d8a711c-0969-4002-8b2f-84acf31ea060-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g2wvg\" (UID: \"8d8a711c-0969-4002-8b2f-84acf31ea060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429533 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6c612c8-92ec-4052-86ee-a1f340e70b04-trusted-ca\") pod \"ingress-operator-5b745b69d9-mml8f\" (UID: \"a6c612c8-92ec-4052-86ee-a1f340e70b04\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429555 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-stats-auth\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429573 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2f8b3127-d745-44c3-9170-9ebd73c5f2ea-srv-cert\") pod \"catalog-operator-68c6474976-vd6gv\" (UID: \"2f8b3127-d745-44c3-9170-9ebd73c5f2ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv" Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 
09:48:47.429598 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8779368-6bce-4ab6-b3e0-3566175db496-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rngv8\" (UID: \"e8779368-6bce-4ab6-b3e0-3566175db496\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429620 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684028c5-aab7-4020-8ccd-b1f7b575f59d-config\") pod \"kube-controller-manager-operator-78b949d7b-mdq8d\" (UID: \"684028c5-aab7-4020-8ccd-b1f7b575f59d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429655 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/684028c5-aab7-4020-8ccd-b1f7b575f59d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mdq8d\" (UID: \"684028c5-aab7-4020-8ccd-b1f7b575f59d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429679 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14645e18-5ae5-40f7-b52f-591a49032bc0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-27726\" (UID: \"14645e18-5ae5-40f7-b52f-591a49032bc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-27726"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429701 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c082f9-de6d-4efb-970d-8497d39ab890-config\") pod \"kube-apiserver-operator-766d6c64bb-xdf86\" (UID: \"73c082f9-de6d-4efb-970d-8497d39ab890\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429723 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8779368-6bce-4ab6-b3e0-3566175db496-proxy-tls\") pod \"machine-config-controller-84d6567774-rngv8\" (UID: \"e8779368-6bce-4ab6-b3e0-3566175db496\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429745 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hf9c\" (UniqueName: \"kubernetes.io/projected/3339365c-8f70-47e8-9cc4-51f20cf3068e-kube-api-access-9hf9c\") pod \"migrator-59844c95c7-fr5b7\" (UID: \"3339365c-8f70-47e8-9cc4-51f20cf3068e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fr5b7"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429767 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b12d3d55-4a75-467d-ab67-e1b30e673183-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6dc7k\" (UID: \"b12d3d55-4a75-467d-ab67-e1b30e673183\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429791 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6c612c8-92ec-4052-86ee-a1f340e70b04-metrics-tls\") pod \"ingress-operator-5b745b69d9-mml8f\" (UID: \"a6c612c8-92ec-4052-86ee-a1f340e70b04\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429812 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/14645e18-5ae5-40f7-b52f-591a49032bc0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-27726\" (UID: \"14645e18-5ae5-40f7-b52f-591a49032bc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-27726"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429850 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgsb6\" (UniqueName: \"kubernetes.io/projected/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-kube-api-access-dgsb6\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429871 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-service-ca-bundle\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429893 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-metrics-certs\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.429920 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6c612c8-92ec-4052-86ee-a1f340e70b04-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mml8f\" (UID: \"a6c612c8-92ec-4052-86ee-a1f340e70b04\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430010 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b12d3d55-4a75-467d-ab67-e1b30e673183-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6dc7k\" (UID: \"b12d3d55-4a75-467d-ab67-e1b30e673183\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430042 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c98d2b38-fbe9-4f9f-acce-e7e34dc123e6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ktnvq\" (UID: \"c98d2b38-fbe9-4f9f-acce-e7e34dc123e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430074 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpk4w\" (UniqueName: \"kubernetes.io/projected/e8779368-6bce-4ab6-b3e0-3566175db496-kube-api-access-zpk4w\") pod \"machine-config-controller-84d6567774-rngv8\" (UID: \"e8779368-6bce-4ab6-b3e0-3566175db496\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430103 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2f8b3127-d745-44c3-9170-9ebd73c5f2ea-profile-collector-cert\") pod \"catalog-operator-68c6474976-vd6gv\" (UID: \"2f8b3127-d745-44c3-9170-9ebd73c5f2ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430136 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f3224bd1-c3c3-434c-9549-2b6e7a20f9a2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9mqqn\" (UID: \"f3224bd1-c3c3-434c-9549-2b6e7a20f9a2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mqqn"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430160 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z72bz\" (UniqueName: \"kubernetes.io/projected/f3224bd1-c3c3-434c-9549-2b6e7a20f9a2-kube-api-access-z72bz\") pod \"multus-admission-controller-857f4d67dd-9mqqn\" (UID: \"f3224bd1-c3c3-434c-9549-2b6e7a20f9a2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mqqn"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430187 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0b59dab-c4d7-4baa-9811-f29d7b19be0b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-65v4s\" (UID: \"e0b59dab-c4d7-4baa-9811-f29d7b19be0b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430224 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b4tk\" (UniqueName: \"kubernetes.io/projected/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-kube-api-access-9b4tk\") pod \"service-ca-operator-777779d784-tp67v\" (UID: \"e80c8be2-53ae-4ddc-ab06-59133d53f4eb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430251 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpkds\" (UniqueName: \"kubernetes.io/projected/8d8a711c-0969-4002-8b2f-84acf31ea060-kube-api-access-tpkds\") pod \"cluster-image-registry-operator-dc59b4c8b-g2wvg\" (UID: \"8d8a711c-0969-4002-8b2f-84acf31ea060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430283 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/684028c5-aab7-4020-8ccd-b1f7b575f59d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mdq8d\" (UID: \"684028c5-aab7-4020-8ccd-b1f7b575f59d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430307 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c98d2b38-fbe9-4f9f-acce-e7e34dc123e6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ktnvq\" (UID: \"c98d2b38-fbe9-4f9f-acce-e7e34dc123e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430332 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d8a711c-0969-4002-8b2f-84acf31ea060-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g2wvg\" (UID: \"8d8a711c-0969-4002-8b2f-84acf31ea060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430366 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c98d2b38-fbe9-4f9f-acce-e7e34dc123e6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ktnvq\" (UID: \"c98d2b38-fbe9-4f9f-acce-e7e34dc123e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430391 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp2jc\" (UniqueName: \"kubernetes.io/projected/b12d3d55-4a75-467d-ab67-e1b30e673183-kube-api-access-mp2jc\") pod \"kube-storage-version-migrator-operator-b67b599dd-6dc7k\" (UID: \"b12d3d55-4a75-467d-ab67-e1b30e673183\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430434 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntwdk\" (UniqueName: \"kubernetes.io/projected/2f8b3127-d745-44c3-9170-9ebd73c5f2ea-kube-api-access-ntwdk\") pod \"catalog-operator-68c6474976-vd6gv\" (UID: \"2f8b3127-d745-44c3-9170-9ebd73c5f2ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430467 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-default-certificate\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430511 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73c082f9-de6d-4efb-970d-8497d39ab890-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xdf86\" (UID: \"73c082f9-de6d-4efb-970d-8497d39ab890\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430533 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d8a711c-0969-4002-8b2f-84acf31ea060-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g2wvg\" (UID: \"8d8a711c-0969-4002-8b2f-84acf31ea060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430559 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9jz7\" (UniqueName: \"kubernetes.io/projected/e0b59dab-c4d7-4baa-9811-f29d7b19be0b-kube-api-access-d9jz7\") pod \"control-plane-machine-set-operator-78cbb6b69f-65v4s\" (UID: \"e0b59dab-c4d7-4baa-9811-f29d7b19be0b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430581 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-config\") pod \"service-ca-operator-777779d784-tp67v\" (UID: \"e80c8be2-53ae-4ddc-ab06-59133d53f4eb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430612 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrfh5\" (UniqueName: \"kubernetes.io/projected/14645e18-5ae5-40f7-b52f-591a49032bc0-kube-api-access-jrfh5\") pod \"marketplace-operator-79b997595-27726\" (UID: \"14645e18-5ae5-40f7-b52f-591a49032bc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-27726"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430635 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-serving-cert\") pod \"service-ca-operator-777779d784-tp67v\" (UID: \"e80c8be2-53ae-4ddc-ab06-59133d53f4eb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.430732 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8779368-6bce-4ab6-b3e0-3566175db496-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-rngv8\" (UID: \"e8779368-6bce-4ab6-b3e0-3566175db496\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.431461 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73c082f9-de6d-4efb-970d-8497d39ab890-config\") pod \"kube-apiserver-operator-766d6c64bb-xdf86\" (UID: \"73c082f9-de6d-4efb-970d-8497d39ab890\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.431743 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8d8a711c-0969-4002-8b2f-84acf31ea060-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g2wvg\" (UID: \"8d8a711c-0969-4002-8b2f-84acf31ea060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.433657 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73c082f9-de6d-4efb-970d-8497d39ab890-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-xdf86\" (UID: \"73c082f9-de6d-4efb-970d-8497d39ab890\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.435972 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/8d8a711c-0969-4002-8b2f-84acf31ea060-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g2wvg\" (UID: \"8d8a711c-0969-4002-8b2f-84acf31ea060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.446418 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.467225 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.486597 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.507467 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.527751 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.547561 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.568537 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.575402 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c98d2b38-fbe9-4f9f-acce-e7e34dc123e6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ktnvq\" (UID: \"c98d2b38-fbe9-4f9f-acce-e7e34dc123e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.587666 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.606850 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.613938 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c98d2b38-fbe9-4f9f-acce-e7e34dc123e6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ktnvq\" (UID: \"c98d2b38-fbe9-4f9f-acce-e7e34dc123e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.627160 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.666555 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.688097 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.692243 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684028c5-aab7-4020-8ccd-b1f7b575f59d-config\") pod \"kube-controller-manager-operator-78b949d7b-mdq8d\" (UID: \"684028c5-aab7-4020-8ccd-b1f7b575f59d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.707145 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.727440 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.735231 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/684028c5-aab7-4020-8ccd-b1f7b575f59d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-mdq8d\" (UID: \"684028c5-aab7-4020-8ccd-b1f7b575f59d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.747619 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.766981 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.776648 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/f3224bd1-c3c3-434c-9549-2b6e7a20f9a2-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-9mqqn\" (UID: \"f3224bd1-c3c3-434c-9549-2b6e7a20f9a2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mqqn"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.787716 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.807707 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.828147 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.847670 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.856397 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b12d3d55-4a75-467d-ab67-e1b30e673183-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6dc7k\" (UID: \"b12d3d55-4a75-467d-ab67-e1b30e673183\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.867823 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.873293 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b12d3d55-4a75-467d-ab67-e1b30e673183-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6dc7k\" (UID: \"b12d3d55-4a75-467d-ab67-e1b30e673183\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.887372 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.907148 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.918408 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/14645e18-5ae5-40f7-b52f-591a49032bc0-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-27726\" (UID: \"14645e18-5ae5-40f7-b52f-591a49032bc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-27726"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.926937 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.947441 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.979270 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.983405 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14645e18-5ae5-40f7-b52f-591a49032bc0-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-27726\" (UID: \"14645e18-5ae5-40f7-b52f-591a49032bc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-27726"
Sep 30 09:48:47 crc kubenswrapper[4970]: I0930 09:48:47.987404 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.018572 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.027556 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.037672 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2f8b3127-d745-44c3-9170-9ebd73c5f2ea-srv-cert\") pod \"catalog-operator-68c6474976-vd6gv\" (UID: \"2f8b3127-d745-44c3-9170-9ebd73c5f2ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.047852 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.059148 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2f8b3127-d745-44c3-9170-9ebd73c5f2ea-profile-collector-cert\") pod \"catalog-operator-68c6474976-vd6gv\" (UID: \"2f8b3127-d745-44c3-9170-9ebd73c5f2ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.068570 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.088121 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.107483 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.119402 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8779368-6bce-4ab6-b3e0-3566175db496-proxy-tls\") pod \"machine-config-controller-84d6567774-rngv8\" (UID: \"e8779368-6bce-4ab6-b3e0-3566175db496\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.127578 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.148288 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.157260 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0b59dab-c4d7-4baa-9811-f29d7b19be0b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-65v4s\" (UID: \"e0b59dab-c4d7-4baa-9811-f29d7b19be0b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.167836 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.188478 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.219278 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.222517 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6c612c8-92ec-4052-86ee-a1f340e70b04-trusted-ca\") pod \"ingress-operator-5b745b69d9-mml8f\" (UID: \"a6c612c8-92ec-4052-86ee-a1f340e70b04\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.227895 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.245055 4970 request.go:700] Waited for 1.003206903s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.247383 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.257837 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a6c612c8-92ec-4052-86ee-a1f340e70b04-metrics-tls\") pod \"ingress-operator-5b745b69d9-mml8f\" (UID: \"a6c612c8-92ec-4052-86ee-a1f340e70b04\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.268071 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.287667 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.308399 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.328029 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.333289 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-service-ca-bundle\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.347716 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.367245 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.378796 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-metrics-certs\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.388297 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.398435 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-default-certificate\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.408329 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
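[Note on the request.go:700 entry above: this is client-go's client-side rate limiter, not the API server's priority-and-fairness. The kubelet's client drained its token bucket while warming dozens of informer caches, so the GET for the openshift-ingress-operator metrics-tls secret waited about a second before being sent. A minimal sketch of the mechanism, assuming an out-of-cluster client built from a kubeconfig; the path and the QPS/Burst numbers are illustrative, not the kubelet's own settings (the kubelet wires these from kubeAPIQPS/kubeAPIBurst in its configuration):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// QPS is the sustained request rate; Burst is the token-bucket size that
	// absorbs spikes. Once the bucket is empty, client-go sleeps before each
	// request and logs "Waited for <duration> due to client-side throttling".
	cfg.QPS = 50
	cfg.Burst = 100
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}

Raising QPS/Burst makes bursts of list/watch traffic like the one logged here go out without the wait, at the cost of more load on the API server.]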
\"kubernetes.io/secret/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-stats-auth\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.428141 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Sep 30 09:48:48 crc kubenswrapper[4970]: E0930 09:48:48.430853 4970 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Sep 30 09:48:48 crc kubenswrapper[4970]: E0930 09:48:48.431021 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-serving-cert podName:e80c8be2-53ae-4ddc-ab06-59133d53f4eb nodeName:}" failed. No retries permitted until 2025-09-30 09:48:48.930948763 +0000 UTC m=+142.002799737 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-serving-cert") pod "service-ca-operator-777779d784-tp67v" (UID: "e80c8be2-53ae-4ddc-ab06-59133d53f4eb") : failed to sync secret cache: timed out waiting for the condition Sep 30 09:48:48 crc kubenswrapper[4970]: E0930 09:48:48.432419 4970 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Sep 30 09:48:48 crc kubenswrapper[4970]: E0930 09:48:48.432527 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-config podName:e80c8be2-53ae-4ddc-ab06-59133d53f4eb nodeName:}" failed. No retries permitted until 2025-09-30 09:48:48.932498343 +0000 UTC m=+142.004349317 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-config") pod "service-ca-operator-777779d784-tp67v" (UID: "e80c8be2-53ae-4ddc-ab06-59133d53f4eb") : failed to sync configmap cache: timed out waiting for the condition Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.446504 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.468158 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.488503 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.507371 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.528772 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.548860 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.568016 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.587853 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.607752 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.647057 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.668011 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.688915 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.707444 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.728674 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.747981 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.767074 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.787983 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 
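[Note on the E0930 pairs above: this is the volume manager's normal failure path during startup. secret.go and configmap.go report that the object is not yet in the kubelet's informer cache ("failed to sync secret cache: timed out waiting for the condition"), and nestedpendingoperations.go parks the mount with a 500ms durationBeforeRetry; the retried mounts at 09:48:48.957431 and later succeed once the caches fill. A simplified sketch of that retry policy, assuming the usual doubling-with-cap shape; only the 500ms initial delay appears in the log, and the 2-minute cap below is an assumption for illustration, not taken from kubelet source:

package main

import (
	"fmt"
	"time"
)

const (
	initialDurationBeforeRetry = 500 * time.Millisecond // matches the log
	maxDurationBeforeRetry     = 2 * time.Minute        // assumed cap, illustrative
)

// nextRetryDelay doubles the previous wait, starting at 500ms and clamping at the cap.
func nextRetryDelay(prev time.Duration) time.Duration {
	if prev == 0 {
		return initialDurationBeforeRetry
	}
	next := 2 * prev
	if next > maxDurationBeforeRetry {
		next = maxDurationBeforeRetry
	}
	return next
}

func main() {
	var d time.Duration
	for attempt := 1; attempt <= 8; attempt++ {
		d = nextRetryDelay(d)
		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, d)
	}
}

The point of the growing delay is that a transient cache-sync miss costs only half a second, while a persistently missing secret or configmap does not busy-loop the kubelet.]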
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.808101 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.828332 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.846853 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.868871 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.888694 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.908589 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.951159 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddxqx\" (UniqueName: \"kubernetes.io/projected/6f358f9a-4142-4a01-b23b-2c086a9a78fc-kube-api-access-ddxqx\") pod \"cluster-samples-operator-665b6dd947-6lmxk\" (UID: \"6f358f9a-4142-4a01-b23b-2c086a9a78fc\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.957431 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-config\") pod \"service-ca-operator-777779d784-tp67v\" (UID: \"e80c8be2-53ae-4ddc-ab06-59133d53f4eb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.958234 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-serving-cert\") pod \"service-ca-operator-777779d784-tp67v\" (UID: \"e80c8be2-53ae-4ddc-ab06-59133d53f4eb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.958875 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-config\") pod \"service-ca-operator-777779d784-tp67v\" (UID: \"e80c8be2-53ae-4ddc-ab06-59133d53f4eb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.964787 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-serving-cert\") pod \"service-ca-operator-777779d784-tp67v\" (UID: \"e80c8be2-53ae-4ddc-ab06-59133d53f4eb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v"
Sep 30 09:48:48 crc kubenswrapper[4970]: I0930 09:48:48.965410 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67zz8\" (UniqueName: \"kubernetes.io/projected/e2a8c87a-d4b3-443b-af72-412d3eb74754-kube-api-access-67zz8\") pod \"authentication-operator-69f744f599-6r7rb\" (UID: \"e2a8c87a-d4b3-443b-af72-412d3eb74754\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:48.995395 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jh8bg\" (UniqueName: \"kubernetes.io/projected/cd9dda51-d5d7-46e1-886d-865955b5bd39-kube-api-access-jh8bg\") pod \"route-controller-manager-6576b87f9c-zwd8r\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.016541 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p2r4\" (UniqueName: \"kubernetes.io/projected/ea5f4522-63a6-4fdc-add5-e75832a54c98-kube-api-access-2p2r4\") pod \"openshift-apiserver-operator-796bbdcf4f-5x9pd\" (UID: \"ea5f4522-63a6-4fdc-add5-e75832a54c98\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.025728 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxdtk\" (UniqueName: \"kubernetes.io/projected/f677e24a-6a65-4cc4-8653-6ef411944dfd-kube-api-access-gxdtk\") pod \"machine-approver-56656f9798-49sh9\" (UID: \"f677e24a-6a65-4cc4-8653-6ef411944dfd\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.027423 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.035620 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.050382 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.067329 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.070458 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.086206 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.089084 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Sep 30 09:48:49 crc kubenswrapper[4970]: W0930 09:48:49.102969 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf677e24a_6a65_4cc4_8653_6ef411944dfd.slice/crio-ebf272247a7cf8ddf815500481c0ee22259a3d7ad2e4fa5a0fc9992cb9ef6862 WatchSource:0}: Error finding container ebf272247a7cf8ddf815500481c0ee22259a3d7ad2e4fa5a0fc9992cb9ef6862: Status 404 returned error can't find the container with id ebf272247a7cf8ddf815500481c0ee22259a3d7ad2e4fa5a0fc9992cb9ef6862
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.127809 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j777q\" (UniqueName: \"kubernetes.io/projected/d58246f9-2537-42e2-af7b-8db153b987aa-kube-api-access-j777q\") pod \"apiserver-76f77b778f-k4ph9\" (UID: \"d58246f9-2537-42e2-af7b-8db153b987aa\") " pod="openshift-apiserver/apiserver-76f77b778f-k4ph9"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.148263 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zqc9\" (UniqueName: \"kubernetes.io/projected/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-kube-api-access-7zqc9\") pod \"controller-manager-879f6c89f-nmtqv\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.178339 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbdf\" (UniqueName: \"kubernetes.io/projected/200e46f5-be36-4a88-85d0-fb279eba20c5-kube-api-access-sbbdf\") pod \"downloads-7954f5f757-f7zdf\" (UID: \"200e46f5-be36-4a88-85d0-fb279eba20c5\") " pod="openshift-console/downloads-7954f5f757-f7zdf"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.189249 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-k4ph9"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.193231 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78m26\" (UniqueName: \"kubernetes.io/projected/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-kube-api-access-78m26\") pod \"oauth-openshift-558db77b4-gtdq5\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.212136 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zbzf\" (UniqueName: \"kubernetes.io/projected/02c9b08a-94e3-4606-8fbf-188005bbd87d-kube-api-access-6zbzf\") pod \"console-operator-58897d9998-tqp6b\" (UID: \"02c9b08a-94e3-4606-8fbf-188005bbd87d\") " pod="openshift-console-operator/console-operator-58897d9998-tqp6b"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.218201 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.226488 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l24c\" (UniqueName: \"kubernetes.io/projected/08f9bfd0-2121-4159-b0aa-41f5cc539aae-kube-api-access-6l24c\") pod \"openshift-config-operator-7777fb866f-cm6p7\" (UID: \"08f9bfd0-2121-4159-b0aa-41f5cc539aae\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.242698 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w9nz\" (UniqueName: \"kubernetes.io/projected/f2267d30-75c6-4002-ae56-b623dc6d7e42-kube-api-access-2w9nz\") pod \"machine-api-operator-5694c8668f-b78lb\" (UID: \"f2267d30-75c6-4002-ae56-b623dc6d7e42\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.245454 4970 request.go:700] Waited for 1.909291277s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.257018 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.266330 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlvz5\" (UniqueName: \"kubernetes.io/projected/4eac6509-7889-4976-bcc4-bf65486c098f-kube-api-access-vlvz5\") pod \"console-f9d7485db-c8bs2\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " pod="openshift-console/console-f9d7485db-c8bs2"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.266509 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-tqp6b"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.280213 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-259wb\" (UniqueName: \"kubernetes.io/projected/d41d7513-cd63-4320-907a-51d6e48fa9e0-kube-api-access-259wb\") pod \"apiserver-7bbb656c7d-x2m28\" (UID: \"d41d7513-cd63-4320-907a-51d6e48fa9e0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.287794 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.308350 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk"]
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.312087 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.315214 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r"]
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.326123 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-f7zdf"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.328488 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.348328 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.349158 4970 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.355111 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.359219 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.367433 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.378669 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.393578 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c8bs2"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.393832 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.400881 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.422830 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lzl7\" (UniqueName: \"kubernetes.io/projected/a6c612c8-92ec-4052-86ee-a1f340e70b04-kube-api-access-7lzl7\") pod \"ingress-operator-5b745b69d9-mml8f\" (UID: \"a6c612c8-92ec-4052-86ee-a1f340e70b04\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.459560 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z72bz\" (UniqueName: \"kubernetes.io/projected/f3224bd1-c3c3-434c-9549-2b6e7a20f9a2-kube-api-access-z72bz\") pod \"multus-admission-controller-857f4d67dd-9mqqn\" (UID: \"f3224bd1-c3c3-434c-9549-2b6e7a20f9a2\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-9mqqn"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.476216 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hf9c\" (UniqueName: \"kubernetes.io/projected/3339365c-8f70-47e8-9cc4-51f20cf3068e-kube-api-access-9hf9c\") pod \"migrator-59844c95c7-fr5b7\" (UID: \"3339365c-8f70-47e8-9cc4-51f20cf3068e\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fr5b7"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.485868 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgsb6\" (UniqueName: \"kubernetes.io/projected/f9105a16-9a94-4ae6-b78d-9eb3b7b0535b-kube-api-access-dgsb6\") pod \"router-default-5444994796-mnwkr\" (UID: \"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b\") " pod="openshift-ingress/router-default-5444994796-mnwkr"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.500007 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-k4ph9"]
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.523188 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a6c612c8-92ec-4052-86ee-a1f340e70b04-bound-sa-token\") pod \"ingress-operator-5b745b69d9-mml8f\" (UID: \"a6c612c8-92ec-4052-86ee-a1f340e70b04\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.527332 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b4tk\" (UniqueName: \"kubernetes.io/projected/e80c8be2-53ae-4ddc-ab06-59133d53f4eb-kube-api-access-9b4tk\") pod \"service-ca-operator-777779d784-tp67v\" (UID: \"e80c8be2-53ae-4ddc-ab06-59133d53f4eb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.534031 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mqqn"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.553653 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpkds\" (UniqueName: \"kubernetes.io/projected/8d8a711c-0969-4002-8b2f-84acf31ea060-kube-api-access-tpkds\") pod \"cluster-image-registry-operator-dc59b4c8b-g2wvg\" (UID: \"8d8a711c-0969-4002-8b2f-84acf31ea060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.572123 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c98d2b38-fbe9-4f9f-acce-e7e34dc123e6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-ktnvq\" (UID: \"c98d2b38-fbe9-4f9f-acce-e7e34dc123e6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.579650 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-6r7rb"]
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.580170 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.586030 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/684028c5-aab7-4020-8ccd-b1f7b575f59d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-mdq8d\" (UID: \"684028c5-aab7-4020-8ccd-b1f7b575f59d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.586289 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-mnwkr"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.594421 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fr5b7"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.599999 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.618288 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-tqp6b"]
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.620259 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpk4w\" (UniqueName: \"kubernetes.io/projected/e8779368-6bce-4ab6-b3e0-3566175db496-kube-api-access-zpk4w\") pod \"machine-config-controller-84d6567774-rngv8\" (UID: \"e8779368-6bce-4ab6-b3e0-3566175db496\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.625539 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73c082f9-de6d-4efb-970d-8497d39ab890-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-xdf86\" (UID: \"73c082f9-de6d-4efb-970d-8497d39ab890\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.642756 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp2jc\" (UniqueName: \"kubernetes.io/projected/b12d3d55-4a75-467d-ab67-e1b30e673183-kube-api-access-mp2jc\") pod \"kube-storage-version-migrator-operator-b67b599dd-6dc7k\" (UID: \"b12d3d55-4a75-467d-ab67-e1b30e673183\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.654240 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" event={"ID":"cd9dda51-d5d7-46e1-886d-865955b5bd39","Type":"ContainerStarted","Data":"8704e70dff1fe4e24ba4a4b0e05fad68fc671ff00d94de39d910687a1ee205dc"}
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.654328 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" event={"ID":"cd9dda51-d5d7-46e1-886d-865955b5bd39","Type":"ContainerStarted","Data":"3b2c98bf5dc524fcd6fef681f27c749b0d8254b9a106792b719ccda2ac7d6daf"}
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.654585 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.656229 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" event={"ID":"f677e24a-6a65-4cc4-8653-6ef411944dfd","Type":"ContainerStarted","Data":"2213989c1f109c969acaeb25694bd83824a72018fb1ce5434b91a3cc6e841481"}
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.656255 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" event={"ID":"f677e24a-6a65-4cc4-8653-6ef411944dfd","Type":"ContainerStarted","Data":"ebf272247a7cf8ddf815500481c0ee22259a3d7ad2e4fa5a0fc9992cb9ef6862"}
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.656905 4970 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-zwd8r container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.656954 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" podUID="cd9dda51-d5d7-46e1-886d-865955b5bd39" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.662157 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" event={"ID":"d58246f9-2537-42e2-af7b-8db153b987aa","Type":"ContainerStarted","Data":"7f9de711281b3be45345a7b13d0dce15e7cd778eec3c4a55b9fe7a3cfbc80b37"}
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.664423 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk" event={"ID":"6f358f9a-4142-4a01-b23b-2c086a9a78fc","Type":"ContainerStarted","Data":"6efd4b0c92657f209a04050c0c166397f1d565a9b7b80e36c3c7e08f55f6bb89"}
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.669858 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntwdk\" (UniqueName: \"kubernetes.io/projected/2f8b3127-d745-44c3-9170-9ebd73c5f2ea-kube-api-access-ntwdk\") pod \"catalog-operator-68c6474976-vd6gv\" (UID: \"2f8b3127-d745-44c3-9170-9ebd73c5f2ea\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.685886 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9jz7\" (UniqueName: \"kubernetes.io/projected/e0b59dab-c4d7-4baa-9811-f29d7b19be0b-kube-api-access-d9jz7\") pod \"control-plane-machine-set-operator-78cbb6b69f-65v4s\" (UID: \"e0b59dab-c4d7-4baa-9811-f29d7b19be0b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.698792 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd"]
Sep 30 09:48:49 crc kubenswrapper[4970]: W0930 09:48:49.706429 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9105a16_9a94_4ae6_b78d_9eb3b7b0535b.slice/crio-c59b9fb7b3b748054e7800f7a7763bdc5124f9702930223eac796ecc3a0cd65d WatchSource:0}: Error finding container c59b9fb7b3b748054e7800f7a7763bdc5124f9702930223eac796ecc3a0cd65d: Status 404 returned error can't find the container with id c59b9fb7b3b748054e7800f7a7763bdc5124f9702930223eac796ecc3a0cd65d
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.718652 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8d8a711c-0969-4002-8b2f-84acf31ea060-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g2wvg\" (UID: \"8d8a711c-0969-4002-8b2f-84acf31ea060\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg"
Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.722300 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrfh5\" (UniqueName: \"kubernetes.io/projected/14645e18-5ae5-40f7-b52f-591a49032bc0-kube-api-access-jrfh5\") pod
\"marketplace-operator-79b997595-27726\" (UID: \"14645e18-5ae5-40f7-b52f-591a49032bc0\") " pod="openshift-marketplace/marketplace-operator-79b997595-27726" Sep 30 09:48:49 crc kubenswrapper[4970]: W0930 09:48:49.747660 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea5f4522_63a6_4fdc_add5_e75832a54c98.slice/crio-bf2728f1871a7266e967492b5162b500e87bf644b7d3e50256f84f16743d3f20 WatchSource:0}: Error finding container bf2728f1871a7266e967492b5162b500e87bf644b7d3e50256f84f16743d3f20: Status 404 returned error can't find the container with id bf2728f1871a7266e967492b5162b500e87bf644b7d3e50256f84f16743d3f20 Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771335 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/54dbd8a8-c111-4c80-9ab3-84ddc9531458-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hsstc\" (UID: \"54dbd8a8-c111-4c80-9ab3-84ddc9531458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771375 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0edb6aa0-1020-44cf-b155-bb02e5ae4fd4-metrics-tls\") pod \"dns-operator-744455d44c-qmlst\" (UID: \"0edb6aa0-1020-44cf-b155-bb02e5ae4fd4\") " pod="openshift-dns-operator/dns-operator-744455d44c-qmlst" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771401 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-config\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771428 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/54dbd8a8-c111-4c80-9ab3-84ddc9531458-srv-cert\") pod \"olm-operator-6b444d44fb-hsstc\" (UID: \"54dbd8a8-c111-4c80-9ab3-84ddc9531458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771454 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-registry-tls\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771474 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkhgl\" (UniqueName: \"kubernetes.io/projected/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-kube-api-access-rkhgl\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771497 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brzwb\" (UniqueName: \"kubernetes.io/projected/0edb6aa0-1020-44cf-b155-bb02e5ae4fd4-kube-api-access-brzwb\") pod 
\"dns-operator-744455d44c-qmlst\" (UID: \"0edb6aa0-1020-44cf-b155-bb02e5ae4fd4\") " pod="openshift-dns-operator/dns-operator-744455d44c-qmlst" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771519 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f2c73d3-00d3-491e-8050-fe9f69126993-registry-certificates\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771541 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdbeb479-93e2-4da6-a9d0-052a9f90ab9c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rhdkf\" (UID: \"fdbeb479-93e2-4da6-a9d0-052a9f90ab9c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771578 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f2c73d3-00d3-491e-8050-fe9f69126993-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771619 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfq9c\" (UniqueName: \"kubernetes.io/projected/54dbd8a8-c111-4c80-9ab3-84ddc9531458-kube-api-access-zfq9c\") pod \"olm-operator-6b444d44fb-hsstc\" (UID: \"54dbd8a8-c111-4c80-9ab3-84ddc9531458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771653 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2c73d3-00d3-491e-8050-fe9f69126993-trusted-ca\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771684 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-etcd-client\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771706 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-bound-sa-token\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771728 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-etcd-ca\") pod \"etcd-operator-b45778765-f4xmt\" (UID: 
\"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.771938 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq4b6\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-kube-api-access-mq4b6\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.772114 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86652\" (UniqueName: \"kubernetes.io/projected/fdbeb479-93e2-4da6-a9d0-052a9f90ab9c-kube-api-access-86652\") pod \"openshift-controller-manager-operator-756b6f6bc6-rhdkf\" (UID: \"fdbeb479-93e2-4da6-a9d0-052a9f90ab9c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.772164 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-etcd-service-ca\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.772203 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdbeb479-93e2-4da6-a9d0-052a9f90ab9c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rhdkf\" (UID: \"fdbeb479-93e2-4da6-a9d0-052a9f90ab9c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.772291 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.772324 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f2c73d3-00d3-491e-8050-fe9f69126993-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.772352 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-serving-cert\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: E0930 09:48:49.779469 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-09-30 09:48:50.279446902 +0000 UTC m=+143.351297836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.783834 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.799398 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.802484 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.845689 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.852563 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-27726" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.856968 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.864819 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.871217 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.873741 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.873972 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/54dbd8a8-c111-4c80-9ab3-84ddc9531458-srv-cert\") pod \"olm-operator-6b444d44fb-hsstc\" (UID: \"54dbd8a8-c111-4c80-9ab3-84ddc9531458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874018 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-registry-tls\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874038 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkhgl\" (UniqueName: \"kubernetes.io/projected/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-kube-api-access-rkhgl\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874058 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brzwb\" (UniqueName: \"kubernetes.io/projected/0edb6aa0-1020-44cf-b155-bb02e5ae4fd4-kube-api-access-brzwb\") pod \"dns-operator-744455d44c-qmlst\" (UID: \"0edb6aa0-1020-44cf-b155-bb02e5ae4fd4\") " pod="openshift-dns-operator/dns-operator-744455d44c-qmlst" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874097 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crsl4\" (UniqueName: \"kubernetes.io/projected/f1e25561-aedc-4396-8a32-03e4a401274b-kube-api-access-crsl4\") pod \"machine-config-server-tnccw\" (UID: \"f1e25561-aedc-4396-8a32-03e4a401274b\") " pod="openshift-machine-config-operator/machine-config-server-tnccw" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874134 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f2c73d3-00d3-491e-8050-fe9f69126993-registry-certificates\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874154 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdbeb479-93e2-4da6-a9d0-052a9f90ab9c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rhdkf\" (UID: \"fdbeb479-93e2-4da6-a9d0-052a9f90ab9c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874171 4970 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcvfq\" (UniqueName: \"kubernetes.io/projected/e980860d-2007-40b5-a3d6-443396650e2d-kube-api-access-kcvfq\") pod \"dns-default-svmb7\" (UID: \"e980860d-2007-40b5-a3d6-443396650e2d\") " pod="openshift-dns/dns-default-svmb7" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874227 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d54f4ba0-36ce-4907-b64a-931857c06d30-signing-cabundle\") pod \"service-ca-9c57cc56f-cs6xx\" (UID: \"d54f4ba0-36ce-4907-b64a-931857c06d30\") " pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874263 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-socket-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874288 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b991f6b0-8b44-424d-b082-b753223ffcd5-apiservice-cert\") pod \"packageserver-d55dfcdfc-v7cmd\" (UID: \"b991f6b0-8b44-424d-b082-b753223ffcd5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874380 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f2c73d3-00d3-491e-8050-fe9f69126993-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874400 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkczq\" (UniqueName: \"kubernetes.io/projected/94e420ce-a1c2-422a-aaba-0553236d7e6a-kube-api-access-gkczq\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874429 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/883289d1-ad8e-470a-8779-41fabbbee527-cert\") pod \"ingress-canary-k2nnd\" (UID: \"883289d1-ad8e-470a-8779-41fabbbee527\") " pod="openshift-ingress-canary/ingress-canary-k2nnd" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874487 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfq9c\" (UniqueName: \"kubernetes.io/projected/54dbd8a8-c111-4c80-9ab3-84ddc9531458-kube-api-access-zfq9c\") pod \"olm-operator-6b444d44fb-hsstc\" (UID: \"54dbd8a8-c111-4c80-9ab3-84ddc9531458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:49 crc kubenswrapper[4970]: E0930 09:48:49.874576 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-09-30 09:48:50.374523628 +0000 UTC m=+143.446374562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874688 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-mountpoint-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874760 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6b5e101-ca8e-4284-8302-b01361523ccd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-sm49z\" (UID: \"c6b5e101-ca8e-4284-8302-b01361523ccd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874918 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2c73d3-00d3-491e-8050-fe9f69126993-trusted-ca\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.874956 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs95c\" (UniqueName: \"kubernetes.io/projected/d54f4ba0-36ce-4907-b64a-931857c06d30-kube-api-access-zs95c\") pod \"service-ca-9c57cc56f-cs6xx\" (UID: \"d54f4ba0-36ce-4907-b64a-931857c06d30\") " pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875065 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f1e25561-aedc-4396-8a32-03e4a401274b-certs\") pod \"machine-config-server-tnccw\" (UID: \"f1e25561-aedc-4396-8a32-03e4a401274b\") " pod="openshift-machine-config-operator/machine-config-server-tnccw" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875087 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b991f6b0-8b44-424d-b082-b753223ffcd5-webhook-cert\") pod \"packageserver-d55dfcdfc-v7cmd\" (UID: \"b991f6b0-8b44-424d-b082-b753223ffcd5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875134 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5bfn\" (UniqueName: \"kubernetes.io/projected/7535af89-756e-4e84-b9f3-246296ca252e-kube-api-access-d5bfn\") pod \"collect-profiles-29320425-lj624\" (UID: \"7535af89-756e-4e84-b9f3-246296ca252e\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875169 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7535af89-756e-4e84-b9f3-246296ca252e-secret-volume\") pod \"collect-profiles-29320425-lj624\" (UID: \"7535af89-756e-4e84-b9f3-246296ca252e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875199 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b991f6b0-8b44-424d-b082-b753223ffcd5-tmpfs\") pod \"packageserver-d55dfcdfc-v7cmd\" (UID: \"b991f6b0-8b44-424d-b082-b753223ffcd5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875230 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-etcd-client\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875253 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nwv7\" (UniqueName: \"kubernetes.io/projected/c6b5e101-ca8e-4284-8302-b01361523ccd-kube-api-access-5nwv7\") pod \"package-server-manager-789f6589d5-sm49z\" (UID: \"c6b5e101-ca8e-4284-8302-b01361523ccd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875297 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-bound-sa-token\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875317 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-etcd-ca\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875336 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx8l5\" (UniqueName: \"kubernetes.io/projected/b991f6b0-8b44-424d-b082-b753223ffcd5-kube-api-access-hx8l5\") pod \"packageserver-d55dfcdfc-v7cmd\" (UID: \"b991f6b0-8b44-424d-b082-b753223ffcd5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875380 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-registration-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875411 4970 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f1e25561-aedc-4396-8a32-03e4a401274b-node-bootstrap-token\") pod \"machine-config-server-tnccw\" (UID: \"f1e25561-aedc-4396-8a32-03e4a401274b\") " pod="openshift-machine-config-operator/machine-config-server-tnccw" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875472 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq4b6\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-kube-api-access-mq4b6\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875526 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86652\" (UniqueName: \"kubernetes.io/projected/fdbeb479-93e2-4da6-a9d0-052a9f90ab9c-kube-api-access-86652\") pod \"openshift-controller-manager-operator-756b6f6bc6-rhdkf\" (UID: \"fdbeb479-93e2-4da6-a9d0-052a9f90ab9c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875544 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e980860d-2007-40b5-a3d6-443396650e2d-config-volume\") pod \"dns-default-svmb7\" (UID: \"e980860d-2007-40b5-a3d6-443396650e2d\") " pod="openshift-dns/dns-default-svmb7" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875575 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-etcd-service-ca\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875634 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdbeb479-93e2-4da6-a9d0-052a9f90ab9c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rhdkf\" (UID: \"fdbeb479-93e2-4da6-a9d0-052a9f90ab9c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875653 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7535af89-756e-4e84-b9f3-246296ca252e-config-volume\") pod \"collect-profiles-29320425-lj624\" (UID: \"7535af89-756e-4e84-b9f3-246296ca252e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875669 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34642044-e265-4f47-8cf4-d97eabb78e01-auth-proxy-config\") pod \"machine-config-operator-74547568cd-x2cjp\" (UID: \"34642044-e265-4f47-8cf4-d97eabb78e01\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875696 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-plugins-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875741 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d54f4ba0-36ce-4907-b64a-931857c06d30-signing-key\") pod \"service-ca-9c57cc56f-cs6xx\" (UID: \"d54f4ba0-36ce-4907-b64a-931857c06d30\") " pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875826 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875845 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34642044-e265-4f47-8cf4-d97eabb78e01-images\") pod \"machine-config-operator-74547568cd-x2cjp\" (UID: \"34642044-e265-4f47-8cf4-d97eabb78e01\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875864 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34642044-e265-4f47-8cf4-d97eabb78e01-proxy-tls\") pod \"machine-config-operator-74547568cd-x2cjp\" (UID: \"34642044-e265-4f47-8cf4-d97eabb78e01\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875916 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f2c73d3-00d3-491e-8050-fe9f69126993-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875933 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e980860d-2007-40b5-a3d6-443396650e2d-metrics-tls\") pod \"dns-default-svmb7\" (UID: \"e980860d-2007-40b5-a3d6-443396650e2d\") " pod="openshift-dns/dns-default-svmb7" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.875973 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-serving-cert\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.876009 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-csi-data-dir\") pod 
\"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.876063 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/54dbd8a8-c111-4c80-9ab3-84ddc9531458-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hsstc\" (UID: \"54dbd8a8-c111-4c80-9ab3-84ddc9531458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.876081 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0edb6aa0-1020-44cf-b155-bb02e5ae4fd4-metrics-tls\") pod \"dns-operator-744455d44c-qmlst\" (UID: \"0edb6aa0-1020-44cf-b155-bb02e5ae4fd4\") " pod="openshift-dns-operator/dns-operator-744455d44c-qmlst" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.876154 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs259\" (UniqueName: \"kubernetes.io/projected/883289d1-ad8e-470a-8779-41fabbbee527-kube-api-access-zs259\") pod \"ingress-canary-k2nnd\" (UID: \"883289d1-ad8e-470a-8779-41fabbbee527\") " pod="openshift-ingress-canary/ingress-canary-k2nnd" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.877041 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2c73d3-00d3-491e-8050-fe9f69126993-trusted-ca\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.878194 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-etcd-ca\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.879199 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f2c73d3-00d3-491e-8050-fe9f69126993-registry-certificates\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.879961 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f2c73d3-00d3-491e-8050-fe9f69126993-ca-trust-extracted\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.880533 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-etcd-service-ca\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: E0930 09:48:49.880819 4970 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:50.380801449 +0000 UTC m=+143.452652383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.881621 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fdbeb479-93e2-4da6-a9d0-052a9f90ab9c-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-rhdkf\" (UID: \"fdbeb479-93e2-4da6-a9d0-052a9f90ab9c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.882339 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/54dbd8a8-c111-4c80-9ab3-84ddc9531458-srv-cert\") pod \"olm-operator-6b444d44fb-hsstc\" (UID: \"54dbd8a8-c111-4c80-9ab3-84ddc9531458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.882957 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-config\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.883018 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-config\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.883450 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gx52\" (UniqueName: \"kubernetes.io/projected/34642044-e265-4f47-8cf4-d97eabb78e01-kube-api-access-5gx52\") pod \"machine-config-operator-74547568cd-x2cjp\" (UID: \"34642044-e265-4f47-8cf4-d97eabb78e01\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.885553 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-registry-tls\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.886240 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f2c73d3-00d3-491e-8050-fe9f69126993-installation-pull-secrets\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.887089 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fdbeb479-93e2-4da6-a9d0-052a9f90ab9c-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-rhdkf\" (UID: \"fdbeb479-93e2-4da6-a9d0-052a9f90ab9c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.889410 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-etcd-client\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.889790 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/0edb6aa0-1020-44cf-b155-bb02e5ae4fd4-metrics-tls\") pod \"dns-operator-744455d44c-qmlst\" (UID: \"0edb6aa0-1020-44cf-b155-bb02e5ae4fd4\") " pod="openshift-dns-operator/dns-operator-744455d44c-qmlst" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.894803 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/54dbd8a8-c111-4c80-9ab3-84ddc9531458-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hsstc\" (UID: \"54dbd8a8-c111-4c80-9ab3-84ddc9531458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.901636 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brzwb\" (UniqueName: \"kubernetes.io/projected/0edb6aa0-1020-44cf-b155-bb02e5ae4fd4-kube-api-access-brzwb\") pod \"dns-operator-744455d44c-qmlst\" (UID: \"0edb6aa0-1020-44cf-b155-bb02e5ae4fd4\") " pod="openshift-dns-operator/dns-operator-744455d44c-qmlst" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.903841 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-serving-cert\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.925781 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfq9c\" (UniqueName: \"kubernetes.io/projected/54dbd8a8-c111-4c80-9ab3-84ddc9531458-kube-api-access-zfq9c\") pod \"olm-operator-6b444d44fb-hsstc\" (UID: \"54dbd8a8-c111-4c80-9ab3-84ddc9531458\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.964112 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkhgl\" (UniqueName: \"kubernetes.io/projected/8bdf5e2d-b2de-4da5-b6b5-166eada552ae-kube-api-access-rkhgl\") pod \"etcd-operator-b45778765-f4xmt\" (UID: \"8bdf5e2d-b2de-4da5-b6b5-166eada552ae\") " pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.966463 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-bound-sa-token\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.987311 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.987784 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7535af89-756e-4e84-b9f3-246296ca252e-config-volume\") pod \"collect-profiles-29320425-lj624\" (UID: \"7535af89-756e-4e84-b9f3-246296ca252e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:48:49 crc kubenswrapper[4970]: E0930 09:48:49.987835 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:50.48780851 +0000 UTC m=+143.559659444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.987863 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34642044-e265-4f47-8cf4-d97eabb78e01-auth-proxy-config\") pod \"machine-config-operator-74547568cd-x2cjp\" (UID: \"34642044-e265-4f47-8cf4-d97eabb78e01\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.987913 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-plugins-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.987936 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d54f4ba0-36ce-4907-b64a-931857c06d30-signing-key\") pod \"service-ca-9c57cc56f-cs6xx\" (UID: \"d54f4ba0-36ce-4907-b64a-931857c06d30\") " pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.987968 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988009 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34642044-e265-4f47-8cf4-d97eabb78e01-images\") pod \"machine-config-operator-74547568cd-x2cjp\" (UID: \"34642044-e265-4f47-8cf4-d97eabb78e01\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988037 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34642044-e265-4f47-8cf4-d97eabb78e01-proxy-tls\") pod \"machine-config-operator-74547568cd-x2cjp\" (UID: \"34642044-e265-4f47-8cf4-d97eabb78e01\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988065 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e980860d-2007-40b5-a3d6-443396650e2d-metrics-tls\") pod \"dns-default-svmb7\" (UID: \"e980860d-2007-40b5-a3d6-443396650e2d\") " pod="openshift-dns/dns-default-svmb7" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988094 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-csi-data-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988146 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs259\" (UniqueName: \"kubernetes.io/projected/883289d1-ad8e-470a-8779-41fabbbee527-kube-api-access-zs259\") pod \"ingress-canary-k2nnd\" (UID: \"883289d1-ad8e-470a-8779-41fabbbee527\") " pod="openshift-ingress-canary/ingress-canary-k2nnd" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988177 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gx52\" (UniqueName: \"kubernetes.io/projected/34642044-e265-4f47-8cf4-d97eabb78e01-kube-api-access-5gx52\") pod \"machine-config-operator-74547568cd-x2cjp\" (UID: \"34642044-e265-4f47-8cf4-d97eabb78e01\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988212 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crsl4\" (UniqueName: \"kubernetes.io/projected/f1e25561-aedc-4396-8a32-03e4a401274b-kube-api-access-crsl4\") pod \"machine-config-server-tnccw\" (UID: \"f1e25561-aedc-4396-8a32-03e4a401274b\") " pod="openshift-machine-config-operator/machine-config-server-tnccw" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988237 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcvfq\" (UniqueName: \"kubernetes.io/projected/e980860d-2007-40b5-a3d6-443396650e2d-kube-api-access-kcvfq\") pod \"dns-default-svmb7\" (UID: \"e980860d-2007-40b5-a3d6-443396650e2d\") " pod="openshift-dns/dns-default-svmb7" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988268 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/d54f4ba0-36ce-4907-b64a-931857c06d30-signing-cabundle\") pod \"service-ca-9c57cc56f-cs6xx\" (UID: \"d54f4ba0-36ce-4907-b64a-931857c06d30\") " pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988294 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-socket-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988318 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b991f6b0-8b44-424d-b082-b753223ffcd5-apiservice-cert\") pod \"packageserver-d55dfcdfc-v7cmd\" (UID: \"b991f6b0-8b44-424d-b082-b753223ffcd5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988358 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkczq\" (UniqueName: \"kubernetes.io/projected/94e420ce-a1c2-422a-aaba-0553236d7e6a-kube-api-access-gkczq\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988386 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/883289d1-ad8e-470a-8779-41fabbbee527-cert\") pod \"ingress-canary-k2nnd\" (UID: \"883289d1-ad8e-470a-8779-41fabbbee527\") " pod="openshift-ingress-canary/ingress-canary-k2nnd" Sep 30 09:48:49 crc kubenswrapper[4970]: E0930 09:48:49.988412 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:50.488394175 +0000 UTC m=+143.560245109 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988490 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-mountpoint-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988516 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6b5e101-ca8e-4284-8302-b01361523ccd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-sm49z\" (UID: \"c6b5e101-ca8e-4284-8302-b01361523ccd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988561 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zs95c\" (UniqueName: \"kubernetes.io/projected/d54f4ba0-36ce-4907-b64a-931857c06d30-kube-api-access-zs95c\") pod \"service-ca-9c57cc56f-cs6xx\" (UID: \"d54f4ba0-36ce-4907-b64a-931857c06d30\") " pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988577 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f1e25561-aedc-4396-8a32-03e4a401274b-certs\") pod \"machine-config-server-tnccw\" (UID: \"f1e25561-aedc-4396-8a32-03e4a401274b\") " pod="openshift-machine-config-operator/machine-config-server-tnccw" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988593 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b991f6b0-8b44-424d-b082-b753223ffcd5-webhook-cert\") pod \"packageserver-d55dfcdfc-v7cmd\" (UID: \"b991f6b0-8b44-424d-b082-b753223ffcd5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988617 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5bfn\" (UniqueName: \"kubernetes.io/projected/7535af89-756e-4e84-b9f3-246296ca252e-kube-api-access-d5bfn\") pod \"collect-profiles-29320425-lj624\" (UID: \"7535af89-756e-4e84-b9f3-246296ca252e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988635 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7535af89-756e-4e84-b9f3-246296ca252e-secret-volume\") pod \"collect-profiles-29320425-lj624\" (UID: \"7535af89-756e-4e84-b9f3-246296ca252e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988654 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/b991f6b0-8b44-424d-b082-b753223ffcd5-tmpfs\") pod \"packageserver-d55dfcdfc-v7cmd\" (UID: \"b991f6b0-8b44-424d-b082-b753223ffcd5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988669 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nwv7\" (UniqueName: \"kubernetes.io/projected/c6b5e101-ca8e-4284-8302-b01361523ccd-kube-api-access-5nwv7\") pod \"package-server-manager-789f6589d5-sm49z\" (UID: \"c6b5e101-ca8e-4284-8302-b01361523ccd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988692 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx8l5\" (UniqueName: \"kubernetes.io/projected/b991f6b0-8b44-424d-b082-b753223ffcd5-kube-api-access-hx8l5\") pod \"packageserver-d55dfcdfc-v7cmd\" (UID: \"b991f6b0-8b44-424d-b082-b753223ffcd5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988718 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-registration-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988738 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f1e25561-aedc-4396-8a32-03e4a401274b-node-bootstrap-token\") pod \"machine-config-server-tnccw\" (UID: \"f1e25561-aedc-4396-8a32-03e4a401274b\") " pod="openshift-machine-config-operator/machine-config-server-tnccw" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988786 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e980860d-2007-40b5-a3d6-443396650e2d-config-volume\") pod \"dns-default-svmb7\" (UID: \"e980860d-2007-40b5-a3d6-443396650e2d\") " pod="openshift-dns/dns-default-svmb7" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.989715 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/34642044-e265-4f47-8cf4-d97eabb78e01-auth-proxy-config\") pod \"machine-config-operator-74547568cd-x2cjp\" (UID: \"34642044-e265-4f47-8cf4-d97eabb78e01\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.989776 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e980860d-2007-40b5-a3d6-443396650e2d-config-volume\") pod \"dns-default-svmb7\" (UID: \"e980860d-2007-40b5-a3d6-443396650e2d\") " pod="openshift-dns/dns-default-svmb7" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.989864 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7535af89-756e-4e84-b9f3-246296ca252e-config-volume\") pod \"collect-profiles-29320425-lj624\" (UID: \"7535af89-756e-4e84-b9f3-246296ca252e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:48:49 crc 
kubenswrapper[4970]: I0930 09:48:49.990013 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-registration-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.993149 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/b991f6b0-8b44-424d-b082-b753223ffcd5-tmpfs\") pod \"packageserver-d55dfcdfc-v7cmd\" (UID: \"b991f6b0-8b44-424d-b082-b753223ffcd5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.994122 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d54f4ba0-36ce-4907-b64a-931857c06d30-signing-cabundle\") pod \"service-ca-9c57cc56f-cs6xx\" (UID: \"d54f4ba0-36ce-4907-b64a-931857c06d30\") " pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.988508 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-plugins-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.998211 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-mountpoint-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.998325 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-csi-data-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.998569 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/94e420ce-a1c2-422a-aaba-0553236d7e6a-socket-dir\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:49 crc kubenswrapper[4970]: I0930 09:48:49.998865 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/34642044-e265-4f47-8cf4-d97eabb78e01-images\") pod \"machine-config-operator-74547568cd-x2cjp\" (UID: \"34642044-e265-4f47-8cf4-d97eabb78e01\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.017512 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.018915 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/883289d1-ad8e-470a-8779-41fabbbee527-cert\") pod \"ingress-canary-k2nnd\" (UID: \"883289d1-ad8e-470a-8779-41fabbbee527\") " pod="openshift-ingress-canary/ingress-canary-k2nnd" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.031724 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f1e25561-aedc-4396-8a32-03e4a401274b-node-bootstrap-token\") pod \"machine-config-server-tnccw\" (UID: \"f1e25561-aedc-4396-8a32-03e4a401274b\") " pod="openshift-machine-config-operator/machine-config-server-tnccw" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.042224 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b991f6b0-8b44-424d-b082-b753223ffcd5-webhook-cert\") pod \"packageserver-d55dfcdfc-v7cmd\" (UID: \"b991f6b0-8b44-424d-b082-b753223ffcd5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.044026 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6b5e101-ca8e-4284-8302-b01361523ccd-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-sm49z\" (UID: \"c6b5e101-ca8e-4284-8302-b01361523ccd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.046618 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7535af89-756e-4e84-b9f3-246296ca252e-secret-volume\") pod \"collect-profiles-29320425-lj624\" (UID: \"7535af89-756e-4e84-b9f3-246296ca252e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.053750 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d54f4ba0-36ce-4907-b64a-931857c06d30-signing-key\") pod \"service-ca-9c57cc56f-cs6xx\" (UID: \"d54f4ba0-36ce-4907-b64a-931857c06d30\") " pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.063453 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e980860d-2007-40b5-a3d6-443396650e2d-metrics-tls\") pod \"dns-default-svmb7\" (UID: \"e980860d-2007-40b5-a3d6-443396650e2d\") " pod="openshift-dns/dns-default-svmb7" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.063564 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b991f6b0-8b44-424d-b082-b753223ffcd5-apiservice-cert\") pod \"packageserver-d55dfcdfc-v7cmd\" (UID: \"b991f6b0-8b44-424d-b082-b753223ffcd5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.063639 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f1e25561-aedc-4396-8a32-03e4a401274b-certs\") pod 
\"machine-config-server-tnccw\" (UID: \"f1e25561-aedc-4396-8a32-03e4a401274b\") " pod="openshift-machine-config-operator/machine-config-server-tnccw" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.063788 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/34642044-e265-4f47-8cf4-d97eabb78e01-proxy-tls\") pod \"machine-config-operator-74547568cd-x2cjp\" (UID: \"34642044-e265-4f47-8cf4-d97eabb78e01\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.066747 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86652\" (UniqueName: \"kubernetes.io/projected/fdbeb479-93e2-4da6-a9d0-052a9f90ab9c-kube-api-access-86652\") pod \"openshift-controller-manager-operator-756b6f6bc6-rhdkf\" (UID: \"fdbeb479-93e2-4da6-a9d0-052a9f90ab9c\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.067582 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq4b6\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-kube-api-access-mq4b6\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.069318 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.074463 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-qmlst" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.084391 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-f7zdf"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.085696 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5bfn\" (UniqueName: \"kubernetes.io/projected/7535af89-756e-4e84-b9f3-246296ca252e-kube-api-access-d5bfn\") pod \"collect-profiles-29320425-lj624\" (UID: \"7535af89-756e-4e84-b9f3-246296ca252e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.088162 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.088833 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.090129 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:50 crc kubenswrapper[4970]: E0930 09:48:50.090571 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-09-30 09:48:50.590557283 +0000 UTC m=+143.662408217 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.094963 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-c8bs2"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.097367 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.102790 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nmtqv"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.110347 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx8l5\" (UniqueName: \"kubernetes.io/projected/b991f6b0-8b44-424d-b082-b753223ffcd5-kube-api-access-hx8l5\") pod \"packageserver-d55dfcdfc-v7cmd\" (UID: \"b991f6b0-8b44-424d-b082-b753223ffcd5\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.112213 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gtdq5"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.128069 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nwv7\" (UniqueName: \"kubernetes.io/projected/c6b5e101-ca8e-4284-8302-b01361523ccd-kube-api-access-5nwv7\") pod \"package-server-manager-789f6589d5-sm49z\" (UID: \"c6b5e101-ca8e-4284-8302-b01361523ccd\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.133029 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crsl4\" (UniqueName: \"kubernetes.io/projected/f1e25561-aedc-4396-8a32-03e4a401274b-kube-api-access-crsl4\") pod \"machine-config-server-tnccw\" (UID: \"f1e25561-aedc-4396-8a32-03e4a401274b\") " pod="openshift-machine-config-operator/machine-config-server-tnccw" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.143726 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcvfq\" (UniqueName: \"kubernetes.io/projected/e980860d-2007-40b5-a3d6-443396650e2d-kube-api-access-kcvfq\") pod \"dns-default-svmb7\" (UID: \"e980860d-2007-40b5-a3d6-443396650e2d\") " pod="openshift-dns/dns-default-svmb7" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.164971 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs95c\" (UniqueName: \"kubernetes.io/projected/d54f4ba0-36ce-4907-b64a-931857c06d30-kube-api-access-zs95c\") pod \"service-ca-9c57cc56f-cs6xx\" (UID: \"d54f4ba0-36ce-4907-b64a-931857c06d30\") " pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.192300 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:50 crc kubenswrapper[4970]: E0930 09:48:50.199523 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:50.699502864 +0000 UTC m=+143.771353798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.203786 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkczq\" (UniqueName: \"kubernetes.io/projected/94e420ce-a1c2-422a-aaba-0553236d7e6a-kube-api-access-gkczq\") pod \"csi-hostpathplugin-2sdhl\" (UID: \"94e420ce-a1c2-422a-aaba-0553236d7e6a\") " pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.212413 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.219173 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.224743 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.226557 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs259\" (UniqueName: \"kubernetes.io/projected/883289d1-ad8e-470a-8779-41fabbbee527-kube-api-access-zs259\") pod \"ingress-canary-k2nnd\" (UID: \"883289d1-ad8e-470a-8779-41fabbbee527\") " pod="openshift-ingress-canary/ingress-canary-k2nnd" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.233809 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.234887 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gx52\" (UniqueName: \"kubernetes.io/projected/34642044-e265-4f47-8cf4-d97eabb78e01-kube-api-access-5gx52\") pod \"machine-config-operator-74547568cd-x2cjp\" (UID: \"34642044-e265-4f47-8cf4-d97eabb78e01\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.239768 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.245022 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.263817 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-svmb7" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.273060 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-k2nnd" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.283396 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tnccw" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.294400 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:50 crc kubenswrapper[4970]: E0930 09:48:50.295043 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:50.795014371 +0000 UTC m=+143.866865315 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.305692 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.306298 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-9mqqn"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.316035 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-fr5b7"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.323308 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-tp67v"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.343494 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-b78lb"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.396526 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:50 crc kubenswrapper[4970]: E0930 09:48:50.396984 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:50.896966093 +0000 UTC m=+143.968817027 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.497775 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:50 crc kubenswrapper[4970]: E0930 09:48:50.498113 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:50.998084094 +0000 UTC m=+144.069935028 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.498216 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:50 crc kubenswrapper[4970]: E0930 09:48:50.498841 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:50.998834843 +0000 UTC m=+144.070685777 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.599551 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:50 crc kubenswrapper[4970]: E0930 09:48:50.600170 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:51.100146309 +0000 UTC m=+144.171997243 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.603354 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.626855 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.672315 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk" event={"ID":"6f358f9a-4142-4a01-b23b-2c086a9a78fc","Type":"ContainerStarted","Data":"ce06c2b32c52dfce26db99d89a6be91d6abaae148324396a2228bf6eae6eb727"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.673722 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk" event={"ID":"6f358f9a-4142-4a01-b23b-2c086a9a78fc","Type":"ContainerStarted","Data":"1b0073690f4cf416e1c166be6f24c3ea27e88e6e465a9bf6864961a9d8f45b94"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.676276 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" event={"ID":"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e","Type":"ContainerStarted","Data":"9634f018872bbde8500db6d38082010fd73b161c1b239ef85f130a9298facb56"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.682152 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" event={"ID":"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f","Type":"ContainerStarted","Data":"f8d256c1d074f34b9aca1a574427e3feb774edaa99c65a5f8106e26001474d30"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.686930 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.689446 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c8bs2" event={"ID":"4eac6509-7889-4976-bcc4-bf65486c098f","Type":"ContainerStarted","Data":"43626e403f9ed4538e4bd77f9e10261899ad1cf9b69bb2133b096bae76419b2a"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.701423 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:50 crc kubenswrapper[4970]: E0930 09:48:50.702682 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:51.202661816 +0000 UTC m=+144.274512750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.706667 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mnwkr" event={"ID":"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b","Type":"ContainerStarted","Data":"4b0c491aa44fc2b43e09464cbc6a49571056447fc4de44e15e86b36cc171376f"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.706709 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-mnwkr" event={"ID":"f9105a16-9a94-4ae6-b78d-9eb3b7b0535b","Type":"ContainerStarted","Data":"c59b9fb7b3b748054e7800f7a7763bdc5124f9702930223eac796ecc3a0cd65d"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.740807 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" event={"ID":"f677e24a-6a65-4cc4-8653-6ef411944dfd","Type":"ContainerStarted","Data":"43bed4dc18660ce341348a7bd4c29587eb76c9f1c75f590877324c1f8750323d"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.750643 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd" event={"ID":"ea5f4522-63a6-4fdc-add5-e75832a54c98","Type":"ContainerStarted","Data":"797e032a9a2183a54c2e4384923924e4c0257275bcd44cd937257bc34f4138b0"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.750821 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd" event={"ID":"ea5f4522-63a6-4fdc-add5-e75832a54c98","Type":"ContainerStarted","Data":"bf2728f1871a7266e967492b5162b500e87bf644b7d3e50256f84f16743d3f20"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.763547 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mqqn" event={"ID":"f3224bd1-c3c3-434c-9549-2b6e7a20f9a2","Type":"ContainerStarted","Data":"cc71d2704d0b101c5d222eac45e6460098664f924bfa3979e31217980afdeb90"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.774586 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" event={"ID":"d41d7513-cd63-4320-907a-51d6e48fa9e0","Type":"ContainerStarted","Data":"d70cfe841511e64ccdecc49e9b25981440137af82d6afb946e1ff7c3582809c5"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.779168 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tqp6b" event={"ID":"02c9b08a-94e3-4606-8fbf-188005bbd87d","Type":"ContainerStarted","Data":"e5e2dc2a71b701e8fb9a43732220e5bf09b7701d3a1fe0bab68814d02b154c90"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.779316 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-tqp6b" event={"ID":"02c9b08a-94e3-4606-8fbf-188005bbd87d","Type":"ContainerStarted","Data":"326b2f07d82e828bfc054c16ee827f59ad70d70ddfb8ecdaad8a932f3a9c89f0"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 
09:48:50.779565 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.780811 4970 patch_prober.go:28] interesting pod/console-operator-58897d9998-tqp6b container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.780923 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-tqp6b" podUID="02c9b08a-94e3-4606-8fbf-188005bbd87d" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/readyz\": dial tcp 10.217.0.7:8443: connect: connection refused" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.786821 4970 generic.go:334] "Generic (PLEG): container finished" podID="d58246f9-2537-42e2-af7b-8db153b987aa" containerID="04e4a684b3fce17c1464a955fcc30d3d13a8c5386688c5bae088a8764dd239b9" exitCode=0 Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.786900 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" event={"ID":"d58246f9-2537-42e2-af7b-8db153b987aa","Type":"ContainerDied","Data":"04e4a684b3fce17c1464a955fcc30d3d13a8c5386688c5bae088a8764dd239b9"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.790674 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" event={"ID":"e2a8c87a-d4b3-443b-af72-412d3eb74754","Type":"ContainerStarted","Data":"8c1bca6554c9a223b80a5795f571d4f90e314b76f2ab10d1ea434c805a5a4a14"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.790706 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" event={"ID":"e2a8c87a-d4b3-443b-af72-412d3eb74754","Type":"ContainerStarted","Data":"2010f0e542a6358bf087db35e8e7048b14ba4bb477bf2333117d565c2ad02002"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.794476 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fr5b7" event={"ID":"3339365c-8f70-47e8-9cc4-51f20cf3068e","Type":"ContainerStarted","Data":"fc5731d622a6d2430c44660c4b9cd4e80d1925f68e32da6a7d90597a6a45675e"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.796193 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.807487 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:50 crc kubenswrapper[4970]: E0930 09:48:50.809553 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:51.309519353 +0000 UTC m=+144.381370317 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.821459 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f7zdf" event={"ID":"200e46f5-be36-4a88-85d0-fb279eba20c5","Type":"ContainerStarted","Data":"b070c2e5b2d5adb56b33749ad1ea9d98b24e503df442932855de4f5f527e9244"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.836542 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" event={"ID":"08f9bfd0-2121-4159-b0aa-41f5cc539aae","Type":"ContainerStarted","Data":"63a5be110cad1d60ce0c48b014e2e04018a6baa74b3a23c696d6abd5337740ce"} Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.846426 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-27726"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.854553 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.883011 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.902742 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.910481 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d"] Sep 30 09:48:50 crc kubenswrapper[4970]: I0930 09:48:50.911649 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:50 crc kubenswrapper[4970]: E0930 09:48:50.913588 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:51.413575579 +0000 UTC m=+144.485426503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.013390 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:51 crc kubenswrapper[4970]: E0930 09:48:51.013790 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:51.513775247 +0000 UTC m=+144.585626181 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.053304 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-6r7rb" podStartSLOduration=124.053272879 podStartE2EDuration="2m4.053272879s" podCreationTimestamp="2025-09-30 09:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:51.028526055 +0000 UTC m=+144.100376989" watchObservedRunningTime="2025-09-30 09:48:51.053272879 +0000 UTC m=+144.125123813" Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.061153 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k"] Sep 30 09:48:51 crc kubenswrapper[4970]: W0930 09:48:51.068101 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8779368_6bce_4ab6_b3e0_3566175db496.slice/crio-8880a7951bd492586f941b45d7bb83cdd6d7f95c7419eb011a0a10e37d35dd82 WatchSource:0}: Error finding container 8880a7951bd492586f941b45d7bb83cdd6d7f95c7419eb011a0a10e37d35dd82: Status 404 returned error can't find the container with id 8880a7951bd492586f941b45d7bb83cdd6d7f95c7419eb011a0a10e37d35dd82 Sep 30 09:48:51 crc kubenswrapper[4970]: W0930 09:48:51.068705 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod684028c5_aab7_4020_8ccd_b1f7b575f59d.slice/crio-f96c540ff3c66b1799abbfae7c0a0a679a83b6bc9339390b2b59288e1e9418c5 WatchSource:0}: Error finding container f96c540ff3c66b1799abbfae7c0a0a679a83b6bc9339390b2b59288e1e9418c5: Status 404 returned error can't find the container with id 
f96c540ff3c66b1799abbfae7c0a0a679a83b6bc9339390b2b59288e1e9418c5 Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.115709 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:51 crc kubenswrapper[4970]: E0930 09:48:51.116066 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:51.616053867 +0000 UTC m=+144.687904801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.153331 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf"] Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.164772 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-qmlst"] Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.199406 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-f4xmt"] Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.221307 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:51 crc kubenswrapper[4970]: E0930 09:48:51.221591 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:51.721574571 +0000 UTC m=+144.793425505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.309053 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-mnwkr" podStartSLOduration=123.309034801 podStartE2EDuration="2m3.309034801s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:51.302484364 +0000 UTC m=+144.374335298" watchObservedRunningTime="2025-09-30 09:48:51.309034801 +0000 UTC m=+144.380885735" Sep 30 09:48:51 crc kubenswrapper[4970]: W0930 09:48:51.312009 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bdf5e2d_b2de_4da5_b6b5_166eada552ae.slice/crio-44b4bc6578566d6583dcea56baf57d270444801545a02890909cb1ff45b73da2 WatchSource:0}: Error finding container 44b4bc6578566d6583dcea56baf57d270444801545a02890909cb1ff45b73da2: Status 404 returned error can't find the container with id 44b4bc6578566d6583dcea56baf57d270444801545a02890909cb1ff45b73da2 Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.324214 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:51 crc kubenswrapper[4970]: W0930 09:48:51.324890 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdbeb479_93e2_4da6_a9d0_052a9f90ab9c.slice/crio-0e4337c2c54a84b926074c08a29647be8c8b9a5eed6f3ab766275d5c83f28b80 WatchSource:0}: Error finding container 0e4337c2c54a84b926074c08a29647be8c8b9a5eed6f3ab766275d5c83f28b80: Status 404 returned error can't find the container with id 0e4337c2c54a84b926074c08a29647be8c8b9a5eed6f3ab766275d5c83f28b80 Sep 30 09:48:51 crc kubenswrapper[4970]: E0930 09:48:51.325088 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:51.825076352 +0000 UTC m=+144.896927286 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.345134 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg"] Sep 30 09:48:51 crc kubenswrapper[4970]: W0930 09:48:51.390904 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0edb6aa0_1020_44cf_b155_bb02e5ae4fd4.slice/crio-da5fa8e669f8e430306d27934e9dbf7685c7c7f68e3531d26bbbc49de8023f6d WatchSource:0}: Error finding container da5fa8e669f8e430306d27934e9dbf7685c7c7f68e3531d26bbbc49de8023f6d: Status 404 returned error can't find the container with id da5fa8e669f8e430306d27934e9dbf7685c7c7f68e3531d26bbbc49de8023f6d Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.432739 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:51 crc kubenswrapper[4970]: E0930 09:48:51.432937 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:51.932904305 +0000 UTC m=+145.004755239 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.435413 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" podStartSLOduration=123.435384699 podStartE2EDuration="2m3.435384699s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:51.422095718 +0000 UTC m=+144.493946662" watchObservedRunningTime="2025-09-30 09:48:51.435384699 +0000 UTC m=+144.507235633" Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.439566 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:51 crc kubenswrapper[4970]: E0930 09:48:51.440065 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:51.940047308 +0000 UTC m=+145.011898232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.441385 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z"] Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.464680 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6lmxk" podStartSLOduration=123.464652169 podStartE2EDuration="2m3.464652169s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:51.463417007 +0000 UTC m=+144.535267971" watchObservedRunningTime="2025-09-30 09:48:51.464652169 +0000 UTC m=+144.536503103" Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.482815 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc"] Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.485075 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp"] Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.503738 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd"] Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.518054 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624"] Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.547627 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:51 crc kubenswrapper[4970]: E0930 09:48:51.548170 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:52.048132717 +0000 UTC m=+145.119983651 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.549357 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-cs6xx"] Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.588806 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-mnwkr" Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.604039 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:48:51 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:48:51 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:48:51 crc kubenswrapper[4970]: healthz check failed Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.604403 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.624097 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-tqp6b" podStartSLOduration=123.62396708 podStartE2EDuration="2m3.62396708s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:51.619633409 +0000 UTC m=+144.691484343" watchObservedRunningTime="2025-09-30 09:48:51.62396708 +0000 UTC m=+144.695818014" Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.640494 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-k2nnd"] Sep 30 09:48:51 crc kubenswrapper[4970]: W0930 09:48:51.650616 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7535af89_756e_4e84_b9f3_246296ca252e.slice/crio-4916c6f85c5cc59363c1aa30c9723870c51dd2657a5a6f1749fb05068be2f572 WatchSource:0}: Error finding container 4916c6f85c5cc59363c1aa30c9723870c51dd2657a5a6f1749fb05068be2f572: Status 404 returned error can't find the container with id 4916c6f85c5cc59363c1aa30c9723870c51dd2657a5a6f1749fb05068be2f572 Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.650796 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:51 crc kubenswrapper[4970]: E0930 09:48:51.651210 4970 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:52.151194568 +0000 UTC m=+145.223045512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.668952 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2sdhl"] Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.673747 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-5x9pd" podStartSLOduration=124.673730875 podStartE2EDuration="2m4.673730875s" podCreationTimestamp="2025-09-30 09:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:51.668201494 +0000 UTC m=+144.740052428" watchObservedRunningTime="2025-09-30 09:48:51.673730875 +0000 UTC m=+144.745581809" Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.695548 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-svmb7"] Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.754151 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:51 crc kubenswrapper[4970]: E0930 09:48:51.754417 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:52.254376932 +0000 UTC m=+145.326227866 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.755448 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:51 crc kubenswrapper[4970]: E0930 09:48:51.755967 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:52.255958122 +0000 UTC m=+145.327809056 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: W0930 09:48:51.767860 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94e420ce_a1c2_422a_aaba_0553236d7e6a.slice/crio-a885eff5e96c44e4bdf3253fd4c355fffc9a16bfa04b522ed79417b7ced1fd1b WatchSource:0}: Error finding container a885eff5e96c44e4bdf3253fd4c355fffc9a16bfa04b522ed79417b7ced1fd1b: Status 404 returned error can't find the container with id a885eff5e96c44e4bdf3253fd4c355fffc9a16bfa04b522ed79417b7ced1fd1b Sep 30 09:48:51 crc kubenswrapper[4970]: W0930 09:48:51.798051 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod883289d1_ad8e_470a_8779_41fabbbee527.slice/crio-ae9b32ec536733af075d8c914db5d9ff045dd43160ac6f25b32a2347fa634d2e WatchSource:0}: Error finding container ae9b32ec536733af075d8c914db5d9ff045dd43160ac6f25b32a2347fa634d2e: Status 404 returned error can't find the container with id ae9b32ec536733af075d8c914db5d9ff045dd43160ac6f25b32a2347fa634d2e Sep 30 09:48:51 crc kubenswrapper[4970]: W0930 09:48:51.798807 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode980860d_2007_40b5_a3d6_443396650e2d.slice/crio-a7f8fc58352b422a5f13f224f793e86f2b6e3561655dc9723edf310cf773c824 WatchSource:0}: Error finding container a7f8fc58352b422a5f13f224f793e86f2b6e3561655dc9723edf310cf773c824: Status 404 returned error can't find the container with id a7f8fc58352b422a5f13f224f793e86f2b6e3561655dc9723edf310cf773c824 Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.857298 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:51 crc kubenswrapper[4970]: E0930 09:48:51.857684 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:52.357662058 +0000 UTC m=+145.429512992 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.860785 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" event={"ID":"b991f6b0-8b44-424d-b082-b753223ffcd5","Type":"ContainerStarted","Data":"d3a516a7294f9625493d55e1cdc7b4aece4eab049b4b71435857cb82345d2c21"} Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.889218 4970 generic.go:334] "Generic (PLEG): container finished" podID="d41d7513-cd63-4320-907a-51d6e48fa9e0" containerID="fba8f7c7c24622e2469fb58bcd7cc10148253ecb6ae5af2240824a8a5b8666be" exitCode=0 Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.890089 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" event={"ID":"d41d7513-cd63-4320-907a-51d6e48fa9e0","Type":"ContainerDied","Data":"fba8f7c7c24622e2469fb58bcd7cc10148253ecb6ae5af2240824a8a5b8666be"} Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.921978 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-f7zdf" event={"ID":"200e46f5-be36-4a88-85d0-fb279eba20c5","Type":"ContainerStarted","Data":"4c40bbc789b90793cd3599bdc2e486519f9a5bff146e0a9e573ccf5d912e759d"} Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.922070 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-f7zdf" Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.925227 4970 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7zdf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.925298 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7zdf" podUID="200e46f5-be36-4a88-85d0-fb279eba20c5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.930054 4970 generic.go:334] "Generic (PLEG): container finished" podID="08f9bfd0-2121-4159-b0aa-41f5cc539aae" containerID="479fe2917dae057cc0f5193c71fc8feb89a41915866658237916c2e7a0c67236" exitCode=0 Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.930580 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" event={"ID":"08f9bfd0-2121-4159-b0aa-41f5cc539aae","Type":"ContainerDied","Data":"479fe2917dae057cc0f5193c71fc8feb89a41915866658237916c2e7a0c67236"} Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.938497 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s" event={"ID":"e0b59dab-c4d7-4baa-9811-f29d7b19be0b","Type":"ContainerStarted","Data":"d1a38a966117598fa3de9bf726af97842704811e847e0534a298dc2df7785a92"} Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.938548 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s" event={"ID":"e0b59dab-c4d7-4baa-9811-f29d7b19be0b","Type":"ContainerStarted","Data":"f7780c933a1d373ef2a0798d7117cd1feb2e7520b7b7521fe056a5c3f0880eb9"} Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.952540 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8" event={"ID":"e8779368-6bce-4ab6-b3e0-3566175db496","Type":"ContainerStarted","Data":"22233d445e485897685cec5f778359374ff417b13536dd6077f6e7af45a65ebf"} Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.952591 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8" event={"ID":"e8779368-6bce-4ab6-b3e0-3566175db496","Type":"ContainerStarted","Data":"8880a7951bd492586f941b45d7bb83cdd6d7f95c7419eb011a0a10e37d35dd82"} Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.960811 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:51 crc kubenswrapper[4970]: E0930 09:48:51.963500 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:52.463481369 +0000 UTC m=+145.535332303 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.973347 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" event={"ID":"7535af89-756e-4e84-b9f3-246296ca252e","Type":"ContainerStarted","Data":"4916c6f85c5cc59363c1aa30c9723870c51dd2657a5a6f1749fb05068be2f572"} Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.987450 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-49sh9" podStartSLOduration=124.987426402 podStartE2EDuration="2m4.987426402s" podCreationTimestamp="2025-09-30 09:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:51.985658767 +0000 UTC m=+145.057509701" watchObservedRunningTime="2025-09-30 09:48:51.987426402 +0000 UTC m=+145.059277336" Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.994842 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mqqn" event={"ID":"f3224bd1-c3c3-434c-9549-2b6e7a20f9a2","Type":"ContainerStarted","Data":"81a7b935d6a4e0ed2a530db89469b0eb7cb76c1edca77ece6b1bd634252b30ab"} Sep 30 09:48:51 crc kubenswrapper[4970]: I0930 09:48:51.996677 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qmlst" event={"ID":"0edb6aa0-1020-44cf-b155-bb02e5ae4fd4","Type":"ContainerStarted","Data":"da5fa8e669f8e430306d27934e9dbf7685c7c7f68e3531d26bbbc49de8023f6d"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.009549 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" event={"ID":"54dbd8a8-c111-4c80-9ab3-84ddc9531458","Type":"ContainerStarted","Data":"88dc789be9312ca9a476593eb18eff066121ce64e1b40adfece7c41679bf69c1"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.017340 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k2nnd" event={"ID":"883289d1-ad8e-470a-8779-41fabbbee527","Type":"ContainerStarted","Data":"ae9b32ec536733af075d8c914db5d9ff045dd43160ac6f25b32a2347fa634d2e"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.023783 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv" event={"ID":"2f8b3127-d745-44c3-9170-9ebd73c5f2ea","Type":"ContainerStarted","Data":"0e195afb364feea72a24382361a1f89611276748caadafb7fd5adb17a1ba56b9"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.031812 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v" event={"ID":"e80c8be2-53ae-4ddc-ab06-59133d53f4eb","Type":"ContainerStarted","Data":"24709474879df8305e1338fa8b338520ef792901feef0fdc16662e21dd4d3f3f"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.031849 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v" event={"ID":"e80c8be2-53ae-4ddc-ab06-59133d53f4eb","Type":"ContainerStarted","Data":"cbda97e4266b949761c35de5416f6776c2c8b1b3b8deba5e03bd15af05698afb"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.039047 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" event={"ID":"d54f4ba0-36ce-4907-b64a-931857c06d30","Type":"ContainerStarted","Data":"bba5dea143ca110e93291bdc4b8090f5861f1fbc63aee6ab25738be20b38a5f8"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.062160 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:52 crc kubenswrapper[4970]: E0930 09:48:52.066430 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:52.566404386 +0000 UTC m=+145.638255340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.091408 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k" event={"ID":"b12d3d55-4a75-467d-ab67-e1b30e673183","Type":"ContainerStarted","Data":"59e620e75703dd3693f3078244c20d1c7865f3e8e6e78b0ce2e2e85c3648740f"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.104915 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" event={"ID":"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e","Type":"ContainerStarted","Data":"6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.105300 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.106533 4970 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-gtdq5 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" start-of-body= Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.106575 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" podUID="0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.13:6443/healthz\": dial tcp 10.217.0.13:6443: connect: connection refused" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.123614 4970 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" event={"ID":"c6b5e101-ca8e-4284-8302-b01361523ccd","Type":"ContainerStarted","Data":"00ed088ddd900345b0b4197a1a0b3424e76cd07a704721a36883ac9c1da01796"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.129663 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg" event={"ID":"8d8a711c-0969-4002-8b2f-84acf31ea060","Type":"ContainerStarted","Data":"166b8ac34622fd3afc09fbf6a57944171552604112c7ea3d01b28196331a7a6f"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.138272 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86" event={"ID":"73c082f9-de6d-4efb-970d-8497d39ab890","Type":"ContainerStarted","Data":"aeccc04d6a39394427ea9d9456ee3c22b5f81a4d8964946ca90546463d0a565c"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.138323 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86" event={"ID":"73c082f9-de6d-4efb-970d-8497d39ab890","Type":"ContainerStarted","Data":"3536b4b0f552e810d4fe0c1f10ee0c2ea7f659fc0a767fffe7c93e6135460a02"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.168978 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:52 crc kubenswrapper[4970]: E0930 09:48:52.169869 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:52.669856447 +0000 UTC m=+145.741707381 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.172008 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" event={"ID":"34642044-e265-4f47-8cf4-d97eabb78e01","Type":"ContainerStarted","Data":"1c20993d73ddb8b730dd9f908cdef26753f89e9bd1a95930ca5e8a2e1ecf3c9e"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.190534 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-f7zdf" podStartSLOduration=124.190518366 podStartE2EDuration="2m4.190518366s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:52.146885468 +0000 UTC m=+145.218736402" watchObservedRunningTime="2025-09-30 09:48:52.190518366 +0000 UTC m=+145.262369300" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.200274 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" event={"ID":"8bdf5e2d-b2de-4da5-b6b5-166eada552ae","Type":"ContainerStarted","Data":"44b4bc6578566d6583dcea56baf57d270444801545a02890909cb1ff45b73da2"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.216454 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-27726" event={"ID":"14645e18-5ae5-40f7-b52f-591a49032bc0","Type":"ContainerStarted","Data":"85c49d47f6955adb3c5e35c0011336abe00ba0867f280e16b254c573b3cead70"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.217369 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-27726" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.218588 4970 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-27726 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.218627 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-27726" podUID="14645e18-5ae5-40f7-b52f-591a49032bc0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.242966 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" event={"ID":"d58246f9-2537-42e2-af7b-8db153b987aa","Type":"ContainerStarted","Data":"b4e41404e5f031e930b9d5ae963e80f2fae78e58b378638d5b572ba7fed54c50"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.253555 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq" 
event={"ID":"c98d2b38-fbe9-4f9f-acce-e7e34dc123e6","Type":"ContainerStarted","Data":"589afd8f7cf979ae82b6aaa2ae3f492e318af3d7baea9a617da45378d223d722"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.253601 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq" event={"ID":"c98d2b38-fbe9-4f9f-acce-e7e34dc123e6","Type":"ContainerStarted","Data":"537e29f46a34100fbd7f46045b524ce9fe93b29bed9066a35f1a30a7bc2d3be2"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.269367 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k" podStartSLOduration=124.269326075 podStartE2EDuration="2m4.269326075s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:52.261267589 +0000 UTC m=+145.333118533" watchObservedRunningTime="2025-09-30 09:48:52.269326075 +0000 UTC m=+145.341177009" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.277255 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:52 crc kubenswrapper[4970]: E0930 09:48:52.279275 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:52.779242219 +0000 UTC m=+145.851093193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.279531 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-svmb7" event={"ID":"e980860d-2007-40b5-a3d6-443396650e2d","Type":"ContainerStarted","Data":"a7f8fc58352b422a5f13f224f793e86f2b6e3561655dc9723edf310cf773c824"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.287797 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fr5b7" event={"ID":"3339365c-8f70-47e8-9cc4-51f20cf3068e","Type":"ContainerStarted","Data":"8793dde4059932e70a72b8a6412874cb25e1cf4d46db5e2a2d6de1fbf77b77d6"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.310243 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" event={"ID":"fdbeb479-93e2-4da6-a9d0-052a9f90ab9c","Type":"ContainerStarted","Data":"0e4337c2c54a84b926074c08a29647be8c8b9a5eed6f3ab766275d5c83f28b80"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.315056 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" podStartSLOduration=125.315042326 podStartE2EDuration="2m5.315042326s" podCreationTimestamp="2025-09-30 09:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:52.313451936 +0000 UTC m=+145.385302870" watchObservedRunningTime="2025-09-30 09:48:52.315042326 +0000 UTC m=+145.386893260" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.318194 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tnccw" event={"ID":"f1e25561-aedc-4396-8a32-03e4a401274b","Type":"ContainerStarted","Data":"7895e17de16f2d0eebc4f48b000f85c8c7615f3d8de50e45bf7c7e494ce07b7a"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.325423 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" event={"ID":"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f","Type":"ContainerStarted","Data":"0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.326545 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.342745 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-tp67v" podStartSLOduration=124.342728386 podStartE2EDuration="2m4.342728386s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:52.342016737 +0000 UTC m=+145.413867671" watchObservedRunningTime="2025-09-30 09:48:52.342728386 +0000 UTC m=+145.414579320" Sep 30 09:48:52 
crc kubenswrapper[4970]: I0930 09:48:52.343330 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f" event={"ID":"a6c612c8-92ec-4052-86ee-a1f340e70b04","Type":"ContainerStarted","Data":"826af10b66f1b20056adf35f468566bfec018d3ba034f5136024dfcd5ef2ca03"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.343411 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f" event={"ID":"a6c612c8-92ec-4052-86ee-a1f340e70b04","Type":"ContainerStarted","Data":"de496eb6585b5e5b1f9e13ba4748e65188725cf7145623fbf4d2344f2f44615a"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.347032 4970 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-nmtqv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.347124 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" podUID="169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.379873 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:52 crc kubenswrapper[4970]: E0930 09:48:52.381120 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:52.881086008 +0000 UTC m=+145.952937062 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.387863 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-65v4s" podStartSLOduration=124.387835861 podStartE2EDuration="2m4.387835861s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:52.385923622 +0000 UTC m=+145.457774556" watchObservedRunningTime="2025-09-30 09:48:52.387835861 +0000 UTC m=+145.459686795" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.395380 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c8bs2" event={"ID":"4eac6509-7889-4976-bcc4-bf65486c098f","Type":"ContainerStarted","Data":"b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.428337 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" event={"ID":"f2267d30-75c6-4002-ae56-b623dc6d7e42","Type":"ContainerStarted","Data":"c23772d4aceea6998dc2dbc5952f2f100a3d976d9993e24fd60cf3d44a1d7510"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.428375 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" event={"ID":"f2267d30-75c6-4002-ae56-b623dc6d7e42","Type":"ContainerStarted","Data":"04f46cada775c2d3b928a8695e0350f8f7d91ae6897a538d5658dfbd2284c045"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.450800 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d" event={"ID":"684028c5-aab7-4020-8ccd-b1f7b575f59d","Type":"ContainerStarted","Data":"f96c540ff3c66b1799abbfae7c0a0a679a83b6bc9339390b2b59288e1e9418c5"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.463700 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" event={"ID":"94e420ce-a1c2-422a-aaba-0553236d7e6a","Type":"ContainerStarted","Data":"a885eff5e96c44e4bdf3253fd4c355fffc9a16bfa04b522ed79417b7ced1fd1b"} Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.470767 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-27726" podStartSLOduration=124.470749025 podStartE2EDuration="2m4.470749025s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:52.470726164 +0000 UTC m=+145.542577098" watchObservedRunningTime="2025-09-30 09:48:52.470749025 +0000 UTC m=+145.542599959" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.481064 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:52 crc kubenswrapper[4970]: E0930 09:48:52.482582 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:52.982564557 +0000 UTC m=+146.054415501 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.487502 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-tqp6b" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.528854 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-c8bs2" podStartSLOduration=124.528764471 podStartE2EDuration="2m4.528764471s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:52.520700414 +0000 UTC m=+145.592551358" watchObservedRunningTime="2025-09-30 09:48:52.528764471 +0000 UTC m=+145.600615405" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.584883 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:52 crc kubenswrapper[4970]: E0930 09:48:52.587320 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.087302521 +0000 UTC m=+146.159153455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.600588 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" podStartSLOduration=124.60055543 podStartE2EDuration="2m4.60055543s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:52.594828254 +0000 UTC m=+145.666679188" watchObservedRunningTime="2025-09-30 09:48:52.60055543 +0000 UTC m=+145.672406364" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.601336 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-xdf86" podStartSLOduration=124.60133001 podStartE2EDuration="2m4.60133001s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:52.562636289 +0000 UTC m=+145.634487223" watchObservedRunningTime="2025-09-30 09:48:52.60133001 +0000 UTC m=+145.673180944" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.618274 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:48:52 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:48:52 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:48:52 crc kubenswrapper[4970]: healthz check failed Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.618341 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.624330 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-ktnvq" podStartSLOduration=124.624308459 podStartE2EDuration="2m4.624308459s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:52.62318658 +0000 UTC m=+145.695037524" watchObservedRunningTime="2025-09-30 09:48:52.624308459 +0000 UTC m=+145.696159393" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.687063 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:52 crc kubenswrapper[4970]: E0930 
09:48:52.687285 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.187216651 +0000 UTC m=+146.259067585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.689828 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:52 crc kubenswrapper[4970]: E0930 09:48:52.690220 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.190207867 +0000 UTC m=+146.262058801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.732422 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tnccw" podStartSLOduration=5.732391388 podStartE2EDuration="5.732391388s" podCreationTimestamp="2025-09-30 09:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:52.670981615 +0000 UTC m=+145.742832549" watchObservedRunningTime="2025-09-30 09:48:52.732391388 +0000 UTC m=+145.804242322" Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.791086 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:52 crc kubenswrapper[4970]: E0930 09:48:52.791422 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.29140764 +0000 UTC m=+146.363258574 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.892068 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:52 crc kubenswrapper[4970]: E0930 09:48:52.892412 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.392397078 +0000 UTC m=+146.464248012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.993597 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:52 crc kubenswrapper[4970]: E0930 09:48:52.993749 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.493716704 +0000 UTC m=+146.565567638 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:52 crc kubenswrapper[4970]: I0930 09:48:52.993913 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:52 crc kubenswrapper[4970]: E0930 09:48:52.994358 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.49435023 +0000 UTC m=+146.566201164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.096952 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:53 crc kubenswrapper[4970]: E0930 09:48:53.097622 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.597598865 +0000 UTC m=+146.669449799 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.199137 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:53 crc kubenswrapper[4970]: E0930 09:48:53.199834 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.699808844 +0000 UTC m=+146.771659778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.302438 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:53 crc kubenswrapper[4970]: E0930 09:48:53.302896 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.802877635 +0000 UTC m=+146.874728579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.302984 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:53 crc kubenswrapper[4970]: E0930 09:48:53.303394 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.803364407 +0000 UTC m=+146.875215341 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.404546 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:53 crc kubenswrapper[4970]: E0930 09:48:53.404766 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.904731724 +0000 UTC m=+146.976582658 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.404949 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:53 crc kubenswrapper[4970]: E0930 09:48:53.405358 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:53.9053397 +0000 UTC m=+146.977190634 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.505714 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:53 crc kubenswrapper[4970]: E0930 09:48:53.506443 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:54.00641993 +0000 UTC m=+147.078270864 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.507023 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:53 crc kubenswrapper[4970]: E0930 09:48:53.507505 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:54.007497077 +0000 UTC m=+147.079348011 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.515309 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" event={"ID":"8bdf5e2d-b2de-4da5-b6b5-166eada552ae","Type":"ContainerStarted","Data":"408dd2138ad6453a42ca28666432f50022a3c0ccabde14c92feb2e972771321f"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.517675 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8" event={"ID":"e8779368-6bce-4ab6-b3e0-3566175db496","Type":"ContainerStarted","Data":"3783c74db5032a11b4619833258ea917fe8e60dff12b5049729e3e8df6e6d6e1"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.558023 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d" podStartSLOduration=125.557977331 podStartE2EDuration="2m5.557977331s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:52.778153021 +0000 UTC m=+145.850003955" watchObservedRunningTime="2025-09-30 09:48:53.557977331 +0000 UTC m=+146.629828265" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.560095 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-f4xmt" podStartSLOduration=125.560081094 podStartE2EDuration="2m5.560081094s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:53.556148944 +0000 UTC m=+146.627999878" 
watchObservedRunningTime="2025-09-30 09:48:53.560081094 +0000 UTC m=+146.631932038" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.562470 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-27726" event={"ID":"14645e18-5ae5-40f7-b52f-591a49032bc0","Type":"ContainerStarted","Data":"c7b74aa6640f1403c6775a94d4a7e9f12ba07694385c2573c19f6e9c1ececd91"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.576764 4970 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-27726 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.576872 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-27726" podUID="14645e18-5ae5-40f7-b52f-591a49032bc0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.587787 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6dc7k" event={"ID":"b12d3d55-4a75-467d-ab67-e1b30e673183","Type":"ContainerStarted","Data":"0dbce15e0a9057a3d662a122e4fa49c1f1f6847dd6a8b74b98107946e67ff2b4"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.591771 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" event={"ID":"b991f6b0-8b44-424d-b082-b753223ffcd5","Type":"ContainerStarted","Data":"f704910a5c4f08124f1a1883b4a863cdd1d3e70082f4b523085e779ac72e2069"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.592570 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.593443 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:48:53 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:48:53 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:48:53 crc kubenswrapper[4970]: healthz check failed Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.593480 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.593900 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" event={"ID":"34642044-e265-4f47-8cf4-d97eabb78e01","Type":"ContainerStarted","Data":"ca46cce6a7d6dd2fab4595bb58dc1a618c3fb6d1f8c7edc7082cdd0cfbcb296c"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.593928 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" 
event={"ID":"34642044-e265-4f47-8cf4-d97eabb78e01","Type":"ContainerStarted","Data":"13e7d748576e09f01c9fc0454178ed0ce9434ad6ad2e05d80dd44a0ba4b6ee40"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.598236 4970 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v7cmd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.598312 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" podUID="b991f6b0-8b44-424d-b082-b753223ffcd5" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.609168 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-rngv8" podStartSLOduration=125.609146742 podStartE2EDuration="2m5.609146742s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:53.607642993 +0000 UTC m=+146.679493927" watchObservedRunningTime="2025-09-30 09:48:53.609146742 +0000 UTC m=+146.680997676" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.610288 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:53 crc kubenswrapper[4970]: E0930 09:48:53.611730 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:54.111713907 +0000 UTC m=+147.183564841 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.622337 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-svmb7" event={"ID":"e980860d-2007-40b5-a3d6-443396650e2d","Type":"ContainerStarted","Data":"584a4de20ce819947cc65f7cae75a83f6b0e14b8f8c225b8ca50615d8dcd4331"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.639130 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tnccw" event={"ID":"f1e25561-aedc-4396-8a32-03e4a401274b","Type":"ContainerStarted","Data":"983b40a228a1171ffd9656746321045069c8b523ee7e5bd2ce0d3c636bd431fa"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.666091 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" podStartSLOduration=125.66605979 podStartE2EDuration="2m5.66605979s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:53.663577596 +0000 UTC m=+146.735428530" watchObservedRunningTime="2025-09-30 09:48:53.66605979 +0000 UTC m=+146.737910724" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.714131 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fr5b7" event={"ID":"3339365c-8f70-47e8-9cc4-51f20cf3068e","Type":"ContainerStarted","Data":"0ce50125fb189582f1a346a55b8aa77a448a5bfabf92814767901e5da3231100"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.714868 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-x2cjp" podStartSLOduration=125.71484038 podStartE2EDuration="2m5.71484038s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:53.706721642 +0000 UTC m=+146.778572576" watchObservedRunningTime="2025-09-30 09:48:53.71484038 +0000 UTC m=+146.786691314" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.715010 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:53 crc kubenswrapper[4970]: E0930 09:48:53.715858 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:54.215843765 +0000 UTC m=+147.287694699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.775213 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f" event={"ID":"a6c612c8-92ec-4052-86ee-a1f340e70b04","Type":"ContainerStarted","Data":"5be78f6fe6f5cd236b5468e0c634f314fc08d9050402d3f1b731f1364a51276a"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.811396 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv" event={"ID":"2f8b3127-d745-44c3-9170-9ebd73c5f2ea","Type":"ContainerStarted","Data":"671714870ec247132d5d40467d985b3834d08359a6d89d0c17ce1a5377f4a923"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.812495 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.816208 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:53 crc kubenswrapper[4970]: E0930 09:48:53.817891 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:54.317868609 +0000 UTC m=+147.389719543 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.829089 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-mdq8d" event={"ID":"684028c5-aab7-4020-8ccd-b1f7b575f59d","Type":"ContainerStarted","Data":"4ffc21e89e2c46885671155a50f592dcb5ecb06f21356ff33fa222a48897829b"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.833544 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-fr5b7" podStartSLOduration=125.83351956999999 podStartE2EDuration="2m5.83351957s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:53.746536762 +0000 UTC m=+146.818387696" watchObservedRunningTime="2025-09-30 09:48:53.83351957 +0000 UTC m=+146.905370494" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.834504 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-mml8f" podStartSLOduration=125.834499045 podStartE2EDuration="2m5.834499045s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:53.828959333 +0000 UTC m=+146.900810267" watchObservedRunningTime="2025-09-30 09:48:53.834499045 +0000 UTC m=+146.906349969" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.837166 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.852596 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" event={"ID":"c6b5e101-ca8e-4284-8302-b01361523ccd","Type":"ContainerStarted","Data":"4007e92a805f8070d3ae493e3ce08ab70c736df31a69a241d0936167e7f8d15f"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.853611 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.862746 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg" event={"ID":"8d8a711c-0969-4002-8b2f-84acf31ea060","Type":"ContainerStarted","Data":"5102173a2f2499a17062303879c3267041c076024c6bf5b82ddcb63bd64d352b"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.887056 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-k2nnd" event={"ID":"883289d1-ad8e-470a-8779-41fabbbee527","Type":"ContainerStarted","Data":"4594baffa84ac4f23ac59f9ad6ac781254d04ccc70214a1ce9e39486afafe75e"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.906441 4970 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" event={"ID":"54dbd8a8-c111-4c80-9ab3-84ddc9531458","Type":"ContainerStarted","Data":"869c311c9dcffeccaafc1695496c9be6ef0a0e8bf2c81d39dc65916d8650a546"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.907146 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.938349 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:53 crc kubenswrapper[4970]: E0930 09:48:53.938972 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:54.438943411 +0000 UTC m=+147.510794345 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.956872 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" event={"ID":"08f9bfd0-2121-4159-b0aa-41f5cc539aae","Type":"ContainerStarted","Data":"301ef0ef77a4ea828defcaf19724eec2813e9c66a23dd8377a010ebabf06d602"} Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.957676 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.963265 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" Sep 30 09:48:53 crc kubenswrapper[4970]: I0930 09:48:53.989184 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" event={"ID":"d54f4ba0-36ce-4907-b64a-931857c06d30","Type":"ContainerStarted","Data":"a8cf0383722fe1ba869537bd0e8441d2a00cadc2f7df1bf3f08c6ca41af4f683"} Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.001614 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vd6gv" podStartSLOduration=126.001587836 podStartE2EDuration="2m6.001587836s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:53.905947876 +0000 UTC m=+146.977798810" watchObservedRunningTime="2025-09-30 09:48:54.001587836 +0000 UTC m=+147.073438770" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.009967 4970 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" podStartSLOduration=126.00994288 podStartE2EDuration="2m6.00994288s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:53.982805315 +0000 UTC m=+147.054656249" watchObservedRunningTime="2025-09-30 09:48:54.00994288 +0000 UTC m=+147.081793814" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.032860 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2wvg" podStartSLOduration=126.032829827 podStartE2EDuration="2m6.032829827s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:54.031577105 +0000 UTC m=+147.103428039" watchObservedRunningTime="2025-09-30 09:48:54.032829827 +0000 UTC m=+147.104680751" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.042347 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.045050 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:54.545025239 +0000 UTC m=+147.616876173 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.051798 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qmlst" event={"ID":"0edb6aa0-1020-44cf-b155-bb02e5ae4fd4","Type":"ContainerStarted","Data":"e5ff8b189dd3064f0ce3e709d62d29e1b8886988a3e06cc3dd7a0e5014e19749"} Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.089864 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" event={"ID":"fdbeb479-93e2-4da6-a9d0-052a9f90ab9c","Type":"ContainerStarted","Data":"318d1efe1da0333d2067d2efc84082f0c8f532751e422cde68b7e9b90a51c790"} Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.116226 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-k2nnd" podStartSLOduration=7.116200473 podStartE2EDuration="7.116200473s" podCreationTimestamp="2025-09-30 09:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:54.115052633 +0000 UTC m=+147.186903567" watchObservedRunningTime="2025-09-30 09:48:54.116200473 +0000 UTC m=+147.188051407" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.123167 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" event={"ID":"f2267d30-75c6-4002-ae56-b623dc6d7e42","Type":"ContainerStarted","Data":"3e288754b240a2b2c3e660184f4db8d24ede8fcff08277a40f4b0b2b514f5bf2"} Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.144274 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.146291 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:54.646271843 +0000 UTC m=+147.718122777 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.155173 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" event={"ID":"7535af89-756e-4e84-b9f3-246296ca252e","Type":"ContainerStarted","Data":"698f40eeefc86b59c0ffe4a50850243c02f1109302e2cfb18448e76237bb76de"} Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.159176 4970 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7zdf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.159242 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7zdf" podUID="200e46f5-be36-4a88-85d0-fb279eba20c5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.173467 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.213465 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-cs6xx" podStartSLOduration=126.213436484 podStartE2EDuration="2m6.213436484s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:54.15863903 +0000 UTC m=+147.230489964" watchObservedRunningTime="2025-09-30 09:48:54.213436484 +0000 UTC m=+147.285287428" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.215121 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hsstc" podStartSLOduration=126.215113507 podStartE2EDuration="2m6.215113507s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:54.209276298 +0000 UTC m=+147.281127232" watchObservedRunningTime="2025-09-30 09:48:54.215113507 +0000 UTC m=+147.286964431" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.255537 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.256975 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-09-30 09:48:54.756954049 +0000 UTC m=+147.828804983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.331260 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-rhdkf" podStartSLOduration=126.331234862 podStartE2EDuration="2m6.331234862s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:54.317744876 +0000 UTC m=+147.389595810" watchObservedRunningTime="2025-09-30 09:48:54.331234862 +0000 UTC m=+147.403085796" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.333236 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" podStartSLOduration=126.333229653 podStartE2EDuration="2m6.333229653s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:54.254779363 +0000 UTC m=+147.326630297" watchObservedRunningTime="2025-09-30 09:48:54.333229653 +0000 UTC m=+147.405080587" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.359130 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.359631 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:54.859612239 +0000 UTC m=+147.931463173 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.381432 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.425384 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-b78lb" podStartSLOduration=126.425356344 podStartE2EDuration="2m6.425356344s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:54.424519772 +0000 UTC m=+147.496370716" watchObservedRunningTime="2025-09-30 09:48:54.425356344 +0000 UTC m=+147.497207278" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.460741 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.461419 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:54.961396377 +0000 UTC m=+148.033247311 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.563141 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.563685 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.063669347 +0000 UTC m=+148.135520281 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.595178 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:48:54 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:48:54 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:48:54 crc kubenswrapper[4970]: healthz check failed Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.595265 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.664542 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.664813 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.164759837 +0000 UTC m=+148.236610771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.665185 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.665627 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.165610549 +0000 UTC m=+148.237461483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.721778 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" podStartSLOduration=126.721749287 podStartE2EDuration="2m6.721749287s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:54.584562342 +0000 UTC m=+147.656413296" watchObservedRunningTime="2025-09-30 09:48:54.721749287 +0000 UTC m=+147.793600221" Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.766551 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.766806 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.266770561 +0000 UTC m=+148.338621495 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.767198 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.767652 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.267634333 +0000 UTC m=+148.339485267 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.869391 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.869649 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.369593915 +0000 UTC m=+148.441444849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.869950 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.870508 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.370501338 +0000 UTC m=+148.442352272 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.972076 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.972478 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.472403229 +0000 UTC m=+148.544254173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:54 crc kubenswrapper[4970]: I0930 09:48:54.973051 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:54 crc kubenswrapper[4970]: E0930 09:48:54.973539 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.473510558 +0000 UTC m=+148.545361492 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.074021 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:55 crc kubenswrapper[4970]: E0930 09:48:55.074500 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.574473914 +0000 UTC m=+148.646324849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.103726 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gjvrm"] Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.104951 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.119054 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.131306 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjvrm"] Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.174472 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" event={"ID":"d41d7513-cd63-4320-907a-51d6e48fa9e0","Type":"ContainerStarted","Data":"d20b8eca6be7e0784d398a43cea04f1b71300271e92e2d52d33b4023e666ad87"} Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.175438 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a516da21-a8ba-423e-85ae-49cb3383e933-utilities\") pod \"community-operators-gjvrm\" (UID: \"a516da21-a8ba-423e-85ae-49cb3383e933\") " pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.175821 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a516da21-a8ba-423e-85ae-49cb3383e933-catalog-content\") pod \"community-operators-gjvrm\" (UID: \"a516da21-a8ba-423e-85ae-49cb3383e933\") " pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.175850 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n67j7\" (UniqueName: \"kubernetes.io/projected/a516da21-a8ba-423e-85ae-49cb3383e933-kube-api-access-n67j7\") pod \"community-operators-gjvrm\" (UID: \"a516da21-a8ba-423e-85ae-49cb3383e933\") " pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.175896 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:55 crc kubenswrapper[4970]: E0930 09:48:55.176271 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.676258902 +0000 UTC m=+148.748109836 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.193759 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-svmb7" event={"ID":"e980860d-2007-40b5-a3d6-443396650e2d","Type":"ContainerStarted","Data":"7db7c604a9be20ade5a3b6414dd1a6f02640008e04c6232408c394836d2067b3"} Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.194151 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-svmb7" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.209857 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" event={"ID":"d58246f9-2537-42e2-af7b-8db153b987aa","Type":"ContainerStarted","Data":"127fe7edc0b951c22cf812e502c50d929b49b0f23e7ab3f58b869bf789fb85e5"} Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.233894 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-qmlst" event={"ID":"0edb6aa0-1020-44cf-b155-bb02e5ae4fd4","Type":"ContainerStarted","Data":"05c5e495d04b5d4931dd4d74fff597b8263ee9b9edf5a01303e4ecefb9b2f829"} Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.256625 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" event={"ID":"c6b5e101-ca8e-4284-8302-b01361523ccd","Type":"ContainerStarted","Data":"d861b5ff6a3103dbdb96caf38b0dd71ee13302889cdd56596d3428fd9a03bb9c"} Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.261217 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" podStartSLOduration=127.261193388 podStartE2EDuration="2m7.261193388s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:55.260246424 +0000 UTC m=+148.332097358" watchObservedRunningTime="2025-09-30 09:48:55.261193388 +0000 UTC m=+148.333044322" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.280936 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.281540 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a516da21-a8ba-423e-85ae-49cb3383e933-catalog-content\") pod \"community-operators-gjvrm\" (UID: \"a516da21-a8ba-423e-85ae-49cb3383e933\") " pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.281648 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n67j7\" (UniqueName: 
\"kubernetes.io/projected/a516da21-a8ba-423e-85ae-49cb3383e933-kube-api-access-n67j7\") pod \"community-operators-gjvrm\" (UID: \"a516da21-a8ba-423e-85ae-49cb3383e933\") " pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:48:55 crc kubenswrapper[4970]: E0930 09:48:55.285229 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.785187783 +0000 UTC m=+148.857038717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.285904 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a516da21-a8ba-423e-85ae-49cb3383e933-utilities\") pod \"community-operators-gjvrm\" (UID: \"a516da21-a8ba-423e-85ae-49cb3383e933\") " pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.286481 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a516da21-a8ba-423e-85ae-49cb3383e933-catalog-content\") pod \"community-operators-gjvrm\" (UID: \"a516da21-a8ba-423e-85ae-49cb3383e933\") " pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.287045 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a516da21-a8ba-423e-85ae-49cb3383e933-utilities\") pod \"community-operators-gjvrm\" (UID: \"a516da21-a8ba-423e-85ae-49cb3383e933\") " pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.306265 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mqqn" event={"ID":"f3224bd1-c3c3-434c-9549-2b6e7a20f9a2","Type":"ContainerStarted","Data":"8aa691c094334eb2893fd8683273b5fce2f8b4f37c19897c0114755b8179163e"} Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.328145 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-svmb7" podStartSLOduration=8.328118473 podStartE2EDuration="8.328118473s" podCreationTimestamp="2025-09-30 09:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:55.325510006 +0000 UTC m=+148.397360940" watchObservedRunningTime="2025-09-30 09:48:55.328118473 +0000 UTC m=+148.399969407" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.336534 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" event={"ID":"94e420ce-a1c2-422a-aaba-0553236d7e6a","Type":"ContainerStarted","Data":"e8146c36bb4e2ac0bf07e0984e9240268596aeb04e22fcd4210738b11a25d100"} Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.350230 4970 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-27726" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.365884 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n67j7\" (UniqueName: \"kubernetes.io/projected/a516da21-a8ba-423e-85ae-49cb3383e933-kube-api-access-n67j7\") pod \"community-operators-gjvrm\" (UID: \"a516da21-a8ba-423e-85ae-49cb3383e933\") " pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.390578 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.390741 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" podStartSLOduration=128.390710927 podStartE2EDuration="2m8.390710927s" podCreationTimestamp="2025-09-30 09:46:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:55.389306731 +0000 UTC m=+148.461157655" watchObservedRunningTime="2025-09-30 09:48:55.390710927 +0000 UTC m=+148.462561861" Sep 30 09:48:55 crc kubenswrapper[4970]: E0930 09:48:55.392393 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:55.89237858 +0000 UTC m=+148.964229514 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.424760 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.500281 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.502498 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-qmlst" podStartSLOduration=127.50247031 podStartE2EDuration="2m7.50247031s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:55.436134771 +0000 UTC m=+148.507985705" watchObservedRunningTime="2025-09-30 09:48:55.50247031 +0000 UTC m=+148.574321274" Sep 30 09:48:55 crc kubenswrapper[4970]: E0930 09:48:55.504508 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.004483002 +0000 UTC m=+149.076334096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.521095 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5f7bk"] Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.522902 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.536761 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5f7bk"] Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.554621 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-9mqqn" podStartSLOduration=127.554587385 podStartE2EDuration="2m7.554587385s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:55.55361186 +0000 UTC m=+148.625462794" watchObservedRunningTime="2025-09-30 09:48:55.554587385 +0000 UTC m=+148.626438319" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.606771 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgrcp\" (UniqueName: \"kubernetes.io/projected/feeba15d-0ba3-40c7-9ddb-cbb90c803826-kube-api-access-xgrcp\") pod \"community-operators-5f7bk\" (UID: \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\") " pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.606874 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.606902 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feeba15d-0ba3-40c7-9ddb-cbb90c803826-catalog-content\") pod \"community-operators-5f7bk\" (UID: \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\") " pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.606952 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feeba15d-0ba3-40c7-9ddb-cbb90c803826-utilities\") pod \"community-operators-5f7bk\" (UID: \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\") " pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:48:55 crc kubenswrapper[4970]: E0930 09:48:55.607426 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.107409369 +0000 UTC m=+149.179260293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.635582 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:48:55 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:48:55 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:48:55 crc kubenswrapper[4970]: healthz check failed Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.636142 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.699757 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rrdxh"] Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.701027 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.706899 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.711511 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.711905 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feeba15d-0ba3-40c7-9ddb-cbb90c803826-catalog-content\") pod \"community-operators-5f7bk\" (UID: \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\") " pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.712004 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feeba15d-0ba3-40c7-9ddb-cbb90c803826-utilities\") pod \"community-operators-5f7bk\" (UID: \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\") " pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.712078 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgrcp\" (UniqueName: \"kubernetes.io/projected/feeba15d-0ba3-40c7-9ddb-cbb90c803826-kube-api-access-xgrcp\") pod \"community-operators-5f7bk\" (UID: \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\") " pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:48:55 crc kubenswrapper[4970]: E0930 09:48:55.712630 4970 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.212602804 +0000 UTC m=+149.284453738 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.713215 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feeba15d-0ba3-40c7-9ddb-cbb90c803826-catalog-content\") pod \"community-operators-5f7bk\" (UID: \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\") " pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.713661 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feeba15d-0ba3-40c7-9ddb-cbb90c803826-utilities\") pod \"community-operators-5f7bk\" (UID: \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\") " pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.719356 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrdxh"] Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.792606 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgrcp\" (UniqueName: \"kubernetes.io/projected/feeba15d-0ba3-40c7-9ddb-cbb90c803826-kube-api-access-xgrcp\") pod \"community-operators-5f7bk\" (UID: \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\") " pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.816811 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rtcw\" (UniqueName: \"kubernetes.io/projected/aa02b988-0645-4466-a5bc-9c99033fdcdc-kube-api-access-5rtcw\") pod \"certified-operators-rrdxh\" (UID: \"aa02b988-0645-4466-a5bc-9c99033fdcdc\") " pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.816915 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa02b988-0645-4466-a5bc-9c99033fdcdc-catalog-content\") pod \"certified-operators-rrdxh\" (UID: \"aa02b988-0645-4466-a5bc-9c99033fdcdc\") " pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.816964 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa02b988-0645-4466-a5bc-9c99033fdcdc-utilities\") pod \"certified-operators-rrdxh\" (UID: \"aa02b988-0645-4466-a5bc-9c99033fdcdc\") " pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.817029 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:55 crc kubenswrapper[4970]: E0930 09:48:55.817372 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.317356008 +0000 UTC m=+149.389206942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.871485 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2768z"] Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.872416 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.907383 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2768z"] Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.918702 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:55 crc kubenswrapper[4970]: E0930 09:48:55.919078 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.419048063 +0000 UTC m=+149.490898997 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.919274 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa02b988-0645-4466-a5bc-9c99033fdcdc-catalog-content\") pod \"certified-operators-rrdxh\" (UID: \"aa02b988-0645-4466-a5bc-9c99033fdcdc\") " pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.919406 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa02b988-0645-4466-a5bc-9c99033fdcdc-utilities\") pod \"certified-operators-rrdxh\" (UID: \"aa02b988-0645-4466-a5bc-9c99033fdcdc\") " pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.919473 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.919591 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rtcw\" (UniqueName: \"kubernetes.io/projected/aa02b988-0645-4466-a5bc-9c99033fdcdc-kube-api-access-5rtcw\") pod \"certified-operators-rrdxh\" (UID: \"aa02b988-0645-4466-a5bc-9c99033fdcdc\") " pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.920264 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa02b988-0645-4466-a5bc-9c99033fdcdc-catalog-content\") pod \"certified-operators-rrdxh\" (UID: \"aa02b988-0645-4466-a5bc-9c99033fdcdc\") " pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.920464 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa02b988-0645-4466-a5bc-9c99033fdcdc-utilities\") pod \"certified-operators-rrdxh\" (UID: \"aa02b988-0645-4466-a5bc-9c99033fdcdc\") " pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:48:55 crc kubenswrapper[4970]: E0930 09:48:55.920694 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.420687045 +0000 UTC m=+149.492537979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:55 crc kubenswrapper[4970]: I0930 09:48:55.957822 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.019282 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rtcw\" (UniqueName: \"kubernetes.io/projected/aa02b988-0645-4466-a5bc-9c99033fdcdc-kube-api-access-5rtcw\") pod \"certified-operators-rrdxh\" (UID: \"aa02b988-0645-4466-a5bc-9c99033fdcdc\") " pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.021118 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.021447 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqsk9\" (UniqueName: \"kubernetes.io/projected/0d947653-6517-44e3-8673-5c0bcc7aebae-kube-api-access-vqsk9\") pod \"certified-operators-2768z\" (UID: \"0d947653-6517-44e3-8673-5c0bcc7aebae\") " pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.021526 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d947653-6517-44e3-8673-5c0bcc7aebae-catalog-content\") pod \"certified-operators-2768z\" (UID: \"0d947653-6517-44e3-8673-5c0bcc7aebae\") " pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.021585 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d947653-6517-44e3-8673-5c0bcc7aebae-utilities\") pod \"certified-operators-2768z\" (UID: \"0d947653-6517-44e3-8673-5c0bcc7aebae\") " pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:48:56 crc kubenswrapper[4970]: E0930 09:48:56.021739 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.521713903 +0000 UTC m=+149.593564837 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.035753 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.125234 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d947653-6517-44e3-8673-5c0bcc7aebae-utilities\") pod \"certified-operators-2768z\" (UID: \"0d947653-6517-44e3-8673-5c0bcc7aebae\") " pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.125338 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.125371 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqsk9\" (UniqueName: \"kubernetes.io/projected/0d947653-6517-44e3-8673-5c0bcc7aebae-kube-api-access-vqsk9\") pod \"certified-operators-2768z\" (UID: \"0d947653-6517-44e3-8673-5c0bcc7aebae\") " pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.125409 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d947653-6517-44e3-8673-5c0bcc7aebae-catalog-content\") pod \"certified-operators-2768z\" (UID: \"0d947653-6517-44e3-8673-5c0bcc7aebae\") " pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.126104 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d947653-6517-44e3-8673-5c0bcc7aebae-catalog-content\") pod \"certified-operators-2768z\" (UID: \"0d947653-6517-44e3-8673-5c0bcc7aebae\") " pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.126410 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d947653-6517-44e3-8673-5c0bcc7aebae-utilities\") pod \"certified-operators-2768z\" (UID: \"0d947653-6517-44e3-8673-5c0bcc7aebae\") " pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:48:56 crc kubenswrapper[4970]: E0930 09:48:56.126802 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.626783285 +0000 UTC m=+149.698634219 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.183870 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqsk9\" (UniqueName: \"kubernetes.io/projected/0d947653-6517-44e3-8673-5c0bcc7aebae-kube-api-access-vqsk9\") pod \"certified-operators-2768z\" (UID: \"0d947653-6517-44e3-8673-5c0bcc7aebae\") " pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.197441 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.226439 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:56 crc kubenswrapper[4970]: E0930 09:48:56.226719 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.726699165 +0000 UTC m=+149.798550099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.329578 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:56 crc kubenswrapper[4970]: E0930 09:48:56.330435 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.830417872 +0000 UTC m=+149.902268806 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.343661 4970 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-v7cmd container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.343736 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" podUID="b991f6b0-8b44-424d-b082-b753223ffcd5" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.431771 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" event={"ID":"94e420ce-a1c2-422a-aaba-0553236d7e6a","Type":"ContainerStarted","Data":"013f9477d7a73b104a86e1514cf2f0c7288f4179d587ed6daa038466ece96ffe"} Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.434503 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:56 crc kubenswrapper[4970]: E0930 09:48:56.434609 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.934585711 +0000 UTC m=+150.006436645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.436133 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:56 crc kubenswrapper[4970]: E0930 09:48:56.436535 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:56.93651708 +0000 UTC m=+150.008368014 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.450340 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-v7cmd" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.538841 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:56 crc kubenswrapper[4970]: E0930 09:48:56.539066 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.039040587 +0000 UTC m=+150.110891521 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.540239 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:56 crc kubenswrapper[4970]: E0930 09:48:56.547890 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.047870333 +0000 UTC m=+150.119721267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.557083 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gjvrm"] Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.602183 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:48:56 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:48:56 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:48:56 crc kubenswrapper[4970]: healthz check failed Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.602241 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.642757 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:56 crc kubenswrapper[4970]: E0930 09:48:56.643213 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.143196396 +0000 UTC m=+150.215047330 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.726548 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5f7bk"] Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.746557 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:56 crc kubenswrapper[4970]: E0930 09:48:56.747394 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.247379855 +0000 UTC m=+150.319230789 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.795292 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cm6p7" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.847849 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.848299 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.848345 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.848389 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.848411 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.851070 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:56 crc kubenswrapper[4970]: E0930 09:48:56.853422 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.353374811 +0000 UTC m=+150.425225935 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.871753 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.873351 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.875071 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.921426 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rrdxh"] Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.952875 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:56 crc kubenswrapper[4970]: E0930 09:48:56.953615 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.453603649 +0000 UTC m=+150.525454583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:56 crc kubenswrapper[4970]: I0930 09:48:56.987442 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.004474 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.015316 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.060999 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:57 crc kubenswrapper[4970]: E0930 09:48:57.061541 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.561517304 +0000 UTC m=+150.633368238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.164516 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:57 crc kubenswrapper[4970]: E0930 09:48:57.165012 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.664970504 +0000 UTC m=+150.736821438 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.265535 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:57 crc kubenswrapper[4970]: E0930 09:48:57.265945 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.765925841 +0000 UTC m=+150.837776765 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.370890 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:57 crc kubenswrapper[4970]: E0930 09:48:57.371610 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.871598338 +0000 UTC m=+150.943449272 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.381755 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2768z"] Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.457206 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rg42w"] Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.461127 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rg42w" Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.475518 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Sep 30 09:48:57 crc kubenswrapper[4970]: E0930 09:48:57.475716 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.975692305 +0000 UTC m=+151.047543239 (durationBeforeRetry 500ms). 
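
Note: the repeated nestedpendingoperations.go:348 errors above are the kubelet's volume manager enforcing a retry delay. Until the kubevirt.io.hostpath-provisioner CSI driver registers, every TearDown/MountDevice attempt fails fast and is re-queued with a durationBeforeRetry (500ms in these entries). A minimal sketch of that per-operation backoff bookkeeping, under assumed, illustrative names rather than the kubelet's actual types (in the kubelet the delay also grows exponentially for operations that keep failing):

    // Sketch: per-operation retry backoff in the style of the kubelet's
    // volume manager. After a failure the operation may not run again
    // until durationBeforeRetry has elapsed; consecutive failures grow
    // the delay up to a cap. All names here are illustrative.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    const (
        initialDurationBeforeRetry = 500 * time.Millisecond // matches the log
        maxDurationBeforeRetry     = 2 * time.Minute
    )

    type backoff struct {
        lastError           error
        lastErrorTime       time.Time
        durationBeforeRetry time.Duration
    }

    // update records a failure and picks the next retry delay.
    func (b *backoff) update(err error, now time.Time) {
        if b.lastError == nil {
            b.durationBeforeRetry = initialDurationBeforeRetry
        } else if b.durationBeforeRetry < maxDurationBeforeRetry {
            b.durationBeforeRetry *= 2
        }
        b.lastError, b.lastErrorTime = err, now
    }

    // safeToRetry reports whether the operation may be attempted again.
    func (b *backoff) safeToRetry(now time.Time) bool {
        return now.After(b.lastErrorTime.Add(b.durationBeforeRetry))
    }

    func main() {
        var b backoff
        t0 := time.Now()
        b.update(errors.New("driver name kubevirt.io.hostpath-provisioner not found"), t0)
        fmt.Println(b.safeToRetry(t0.Add(100 * time.Millisecond))) // false: inside the 500ms window
        fmt.Println(b.safeToRetry(t0.Add(600 * time.Millisecond))) // true: the reconciler may retry
    }
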
Sep 30 09:48:57 crc kubenswrapper[4970]: E0930 09:48:57.475716 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.975692305 +0000 UTC m=+151.047543239 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.475834 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp"
Sep 30 09:48:57 crc kubenswrapper[4970]: E0930 09:48:57.476249 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:57.976235089 +0000 UTC m=+151.048086023 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.510098 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rg42w"]
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.534453 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.551078 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrdxh" event={"ID":"aa02b988-0645-4466-a5bc-9c99033fdcdc","Type":"ContainerStarted","Data":"4c9bb8a4a72058caa8e61fc50ee7d66c2e1783e13e2043fd08b9d09034a9dbe1"}
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.562552 4970 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.578959 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.579886 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb8r7\" (UniqueName: \"kubernetes.io/projected/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-kube-api-access-cb8r7\") pod \"redhat-marketplace-rg42w\" (UID: \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\") " pod="openshift-marketplace/redhat-marketplace-rg42w"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.580031 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-catalog-content\") pod \"redhat-marketplace-rg42w\" (UID: \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\") " pod="openshift-marketplace/redhat-marketplace-rg42w"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.580063 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-utilities\") pod \"redhat-marketplace-rg42w\" (UID: \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\") " pod="openshift-marketplace/redhat-marketplace-rg42w"
Sep 30 09:48:57 crc kubenswrapper[4970]: E0930 09:48:57.580289 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:58.080259754 +0000 UTC m=+151.152110688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.603014 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Sep 30 09:48:57 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld
Sep 30 09:48:57 crc kubenswrapper[4970]: [+]process-running ok
Sep 30 09:48:57 crc kubenswrapper[4970]: healthz check failed
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.613594 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.618750 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" event={"ID":"94e420ce-a1c2-422a-aaba-0553236d7e6a","Type":"ContainerStarted","Data":"2eebdd630aaba346f5a198e78b72bddab7c286e0ef80d5dcefd5ce51401e3fbb"}
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.647423 4970 generic.go:334] "Generic (PLEG): container finished" podID="feeba15d-0ba3-40c7-9ddb-cbb90c803826" containerID="0bca685823d0172f0c7d9eedaaacc38f65d1acb41cd7be2807845c1c39414a43" exitCode=0
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.647564 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f7bk" event={"ID":"feeba15d-0ba3-40c7-9ddb-cbb90c803826","Type":"ContainerDied","Data":"0bca685823d0172f0c7d9eedaaacc38f65d1acb41cd7be2807845c1c39414a43"}
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.647607 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f7bk" event={"ID":"feeba15d-0ba3-40c7-9ddb-cbb90c803826","Type":"ContainerStarted","Data":"a33e4917f30511844f9380918f28173e5a217b8373a05e0e002724fdbc4b588e"}
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.682780 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.682864 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb8r7\" (UniqueName: \"kubernetes.io/projected/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-kube-api-access-cb8r7\") pod \"redhat-marketplace-rg42w\" (UID: \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\") " pod="openshift-marketplace/redhat-marketplace-rg42w"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.682947 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-catalog-content\") pod \"redhat-marketplace-rg42w\" (UID: \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\") " pod="openshift-marketplace/redhat-marketplace-rg42w"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.682971 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-utilities\") pod \"redhat-marketplace-rg42w\" (UID: \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\") " pod="openshift-marketplace/redhat-marketplace-rg42w"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.683542 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-utilities\") pod \"redhat-marketplace-rg42w\" (UID: \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\") " pod="openshift-marketplace/redhat-marketplace-rg42w"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.683732 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Sep 30 09:48:57 crc kubenswrapper[4970]: E0930 09:48:57.683918 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:58.183889449 +0000 UTC m=+151.255740383 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.684305 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-catalog-content\") pod \"redhat-marketplace-rg42w\" (UID: \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\") " pod="openshift-marketplace/redhat-marketplace-rg42w"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.713444 4970 generic.go:334] "Generic (PLEG): container finished" podID="a516da21-a8ba-423e-85ae-49cb3383e933" containerID="12f8c4c14fabeaa2f58680b0e96368465278254ac39fac47d2c7bdb1b87017a8" exitCode=0
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.724385 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjvrm" event={"ID":"a516da21-a8ba-423e-85ae-49cb3383e933","Type":"ContainerDied","Data":"12f8c4c14fabeaa2f58680b0e96368465278254ac39fac47d2c7bdb1b87017a8"}
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.724434 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjvrm" event={"ID":"a516da21-a8ba-423e-85ae-49cb3383e933","Type":"ContainerStarted","Data":"adac14ad5537208dc57ffc8b51e5566e3c384d9f1731e20f297765aa584afd81"}
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.754237 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2768z" event={"ID":"0d947653-6517-44e3-8673-5c0bcc7aebae","Type":"ContainerStarted","Data":"689c96d8f4e0ead2514c0ba9b738e9b63937b9b1fe763a728aa5e29f181d8ff0"}
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.762847 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb8r7\" (UniqueName: \"kubernetes.io/projected/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-kube-api-access-cb8r7\") pod \"redhat-marketplace-rg42w\" (UID: \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\") " pod="openshift-marketplace/redhat-marketplace-rg42w"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.786719 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Sep 30 09:48:57 crc kubenswrapper[4970]: E0930 09:48:57.790168 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-09-30 09:48:58.290125511 +0000 UTC m=+151.361976565 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.790403 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp"
Sep 30 09:48:57 crc kubenswrapper[4970]: E0930 09:48:57.790928 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-09-30 09:48:58.290909601 +0000 UTC m=+151.362760535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-vjppp" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.853454 4970 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-09-30T09:48:57.562588422Z","Handler":null,"Name":""}
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.858313 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rg42w"
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.879732 4970 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.879799 4970 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.892217 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
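
Note: the recovery is visible in the entries above. The driver's registration socket appears under /var/lib/kubelet/plugins_registry/, the plugin watcher queues it, and csi_plugin.go validates and registers kubevirt.io.hostpath-provisioner; the stuck TearDown/MountDevice operations then succeed on their next retry (below). For reference, a trimmed sketch of the driver-side registration service the kubelet dials over that socket, in the style of csi-node-driver-registrar; error handling and shutdown are omitted, and only the socket/endpoint paths and version are taken from the log:

    // Sketch: the gRPC Registration service a CSI driver exposes on the
    // plugins_registry socket. The kubelet's plugin watcher dials it,
    // calls GetInfo, validates the answer, then reports the outcome via
    // NotifyRegistrationStatus.
    package main

    import (
        "context"
        "net"

        "google.golang.org/grpc"
        registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
    )

    type registrationServer struct{}

    func (registrationServer) GetInfo(ctx context.Context, req *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
        return &registerapi.PluginInfo{
            Type:              registerapi.CSIPlugin,
            Name:              "kubevirt.io.hostpath-provisioner",
            Endpoint:          "/var/lib/kubelet/plugins/csi-hostpath/csi.sock",
            SupportedVersions: []string{"1.0.0"},
        }, nil
    }

    func (registrationServer) NotifyRegistrationStatus(ctx context.Context, status *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
        // A real registrar would exit and retry when status.PluginRegistered is false.
        return &registerapi.RegistrationStatusResponse{}, nil
    }

    func main() {
        lis, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock")
        if err != nil {
            panic(err)
        }
        srv := grpc.NewServer()
        registerapi.RegisterRegistrationServer(srv, registrationServer{})
        _ = srv.Serve(lis)
    }
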
PluginName "kubernetes.io/csi", VolumeGidValue "" Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.908252 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rdx4x"] Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.911036 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.913192 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.929952 4970 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.930117 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:57 crc kubenswrapper[4970]: I0930 09:48:57.992613 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdx4x"] Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.013254 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4579\" (UniqueName: \"kubernetes.io/projected/90331118-d5f2-4547-84b6-2aaf229316f0-kube-api-access-q4579\") pod \"redhat-marketplace-rdx4x\" (UID: \"90331118-d5f2-4547-84b6-2aaf229316f0\") " pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.013800 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90331118-d5f2-4547-84b6-2aaf229316f0-catalog-content\") pod \"redhat-marketplace-rdx4x\" (UID: \"90331118-d5f2-4547-84b6-2aaf229316f0\") " pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.014083 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90331118-d5f2-4547-84b6-2aaf229316f0-utilities\") pod \"redhat-marketplace-rdx4x\" (UID: \"90331118-d5f2-4547-84b6-2aaf229316f0\") " pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.079733 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-vjppp\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:58 
crc kubenswrapper[4970]: I0930 09:48:58.115196 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4579\" (UniqueName: \"kubernetes.io/projected/90331118-d5f2-4547-84b6-2aaf229316f0-kube-api-access-q4579\") pod \"redhat-marketplace-rdx4x\" (UID: \"90331118-d5f2-4547-84b6-2aaf229316f0\") " pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.115271 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90331118-d5f2-4547-84b6-2aaf229316f0-catalog-content\") pod \"redhat-marketplace-rdx4x\" (UID: \"90331118-d5f2-4547-84b6-2aaf229316f0\") " pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.115296 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90331118-d5f2-4547-84b6-2aaf229316f0-utilities\") pod \"redhat-marketplace-rdx4x\" (UID: \"90331118-d5f2-4547-84b6-2aaf229316f0\") " pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.115774 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90331118-d5f2-4547-84b6-2aaf229316f0-utilities\") pod \"redhat-marketplace-rdx4x\" (UID: \"90331118-d5f2-4547-84b6-2aaf229316f0\") " pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.116041 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90331118-d5f2-4547-84b6-2aaf229316f0-catalog-content\") pod \"redhat-marketplace-rdx4x\" (UID: \"90331118-d5f2-4547-84b6-2aaf229316f0\") " pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.128070 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.144658 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4579\" (UniqueName: \"kubernetes.io/projected/90331118-d5f2-4547-84b6-2aaf229316f0-kube-api-access-q4579\") pod \"redhat-marketplace-rdx4x\" (UID: \"90331118-d5f2-4547-84b6-2aaf229316f0\") " pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.308419 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.382536 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rg42w"] Sep 30 09:48:58 crc kubenswrapper[4970]: W0930 09:48:58.393682 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8140e36b_113c_4dbb_982d_4f94ec7c0a5f.slice/crio-66c64563b323311592c2ae1ae9c677c979692a48fcb58022be65b5a9b7794fd7 WatchSource:0}: Error finding container 66c64563b323311592c2ae1ae9c677c979692a48fcb58022be65b5a9b7794fd7: Status 404 returned error can't find the container with id 66c64563b323311592c2ae1ae9c677c979692a48fcb58022be65b5a9b7794fd7 Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.483023 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hlzvg"] Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.484613 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hlzvg" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.497741 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.503204 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hlzvg"] Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.512184 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.514443 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.514593 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.516636 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vjppp"] Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.518862 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.521876 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.539055 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afde4377-4f82-4586-913d-5238cb5f9c05-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"afde4377-4f82-4586-913d-5238cb5f9c05\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.539686 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afde4377-4f82-4586-913d-5238cb5f9c05-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"afde4377-4f82-4586-913d-5238cb5f9c05\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.539795 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b26d6c-14bb-4572-87c8-6da4476f88a4-catalog-content\") pod \"redhat-operators-hlzvg\" (UID: \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\") " pod="openshift-marketplace/redhat-operators-hlzvg" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.539840 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b26d6c-14bb-4572-87c8-6da4476f88a4-utilities\") pod \"redhat-operators-hlzvg\" (UID: \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\") " pod="openshift-marketplace/redhat-operators-hlzvg" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.539871 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tszq6\" (UniqueName: \"kubernetes.io/projected/c3b26d6c-14bb-4572-87c8-6da4476f88a4-kube-api-access-tszq6\") pod \"redhat-operators-hlzvg\" (UID: \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\") " pod="openshift-marketplace/redhat-operators-hlzvg" Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.598151 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:48:58 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:48:58 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:48:58 crc kubenswrapper[4970]: healthz check failed Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.598240 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 
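
Note: the router entries above show a startup probe failing, and the probe body explains why the endpoint returns 500: the healthz handler aggregates sub-checks, with [-]backend-http and [-]has-synced still failing while only [+]process-running passes. The kubelet keeps re-probing (once per second in this log) until the startup probe succeeds or its failure threshold is exhausted. A probe of that general shape in the Kubernetes Go API; path, port, and thresholds are illustrative assumptions, not values taken from this log:

    // Sketch: a startup probe like the one failing above. The kubelet
    // GETs the endpoint and counts any non-2xx/3xx response (the 500
    // here) as a failure until FailureThreshold is used up.
    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func routerStartupProbe() *corev1.Probe {
        return &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path: "/healthz/ready", // assumed health endpoint
                    Port: intstr.FromInt(1936),
                },
            },
            PeriodSeconds:    1,   // matches the once-per-second failures logged here
            FailureThreshold: 120, // keep retrying while the router syncs state
        }
    }

    func main() {
        fmt.Printf("%+v\n", routerStartupProbe())
    }
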
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.642431 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b26d6c-14bb-4572-87c8-6da4476f88a4-catalog-content\") pod \"redhat-operators-hlzvg\" (UID: \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\") " pod="openshift-marketplace/redhat-operators-hlzvg"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.642546 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b26d6c-14bb-4572-87c8-6da4476f88a4-utilities\") pod \"redhat-operators-hlzvg\" (UID: \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\") " pod="openshift-marketplace/redhat-operators-hlzvg"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.642621 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tszq6\" (UniqueName: \"kubernetes.io/projected/c3b26d6c-14bb-4572-87c8-6da4476f88a4-kube-api-access-tszq6\") pod \"redhat-operators-hlzvg\" (UID: \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\") " pod="openshift-marketplace/redhat-operators-hlzvg"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.642741 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afde4377-4f82-4586-913d-5238cb5f9c05-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"afde4377-4f82-4586-913d-5238cb5f9c05\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.642874 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afde4377-4f82-4586-913d-5238cb5f9c05-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"afde4377-4f82-4586-913d-5238cb5f9c05\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.644127 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b26d6c-14bb-4572-87c8-6da4476f88a4-catalog-content\") pod \"redhat-operators-hlzvg\" (UID: \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\") " pod="openshift-marketplace/redhat-operators-hlzvg"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.644277 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afde4377-4f82-4586-913d-5238cb5f9c05-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"afde4377-4f82-4586-913d-5238cb5f9c05\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.644828 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b26d6c-14bb-4572-87c8-6da4476f88a4-utilities\") pod \"redhat-operators-hlzvg\" (UID: \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\") " pod="openshift-marketplace/redhat-operators-hlzvg"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.677661 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdx4x"]
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.716106 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tszq6\" (UniqueName: \"kubernetes.io/projected/c3b26d6c-14bb-4572-87c8-6da4476f88a4-kube-api-access-tszq6\") pod \"redhat-operators-hlzvg\" (UID: \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\") " pod="openshift-marketplace/redhat-operators-hlzvg"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.717564 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afde4377-4f82-4586-913d-5238cb5f9c05-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"afde4377-4f82-4586-913d-5238cb5f9c05\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Sep 30 09:48:58 crc kubenswrapper[4970]: E0930 09:48:58.741696 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8140e36b_113c_4dbb_982d_4f94ec7c0a5f.slice/crio-7663d250ec3a52f1c802941aa6982362e0ce1df763bda7a70187fc136f766e09.scope\": RecentStats: unable to find data in memory cache]"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.801874 4970 generic.go:334] "Generic (PLEG): container finished" podID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f" containerID="7663d250ec3a52f1c802941aa6982362e0ce1df763bda7a70187fc136f766e09" exitCode=0
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.802027 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rg42w" event={"ID":"8140e36b-113c-4dbb-982d-4f94ec7c0a5f","Type":"ContainerDied","Data":"7663d250ec3a52f1c802941aa6982362e0ce1df763bda7a70187fc136f766e09"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.802099 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rg42w" event={"ID":"8140e36b-113c-4dbb-982d-4f94ec7c0a5f","Type":"ContainerStarted","Data":"66c64563b323311592c2ae1ae9c677c979692a48fcb58022be65b5a9b7794fd7"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.805947 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdx4x" event={"ID":"90331118-d5f2-4547-84b6-2aaf229316f0","Type":"ContainerStarted","Data":"b70bfe8a3da17f3546a023922df2b69af1dc827af58d032912e349cbe1139171"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.809780 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"99ecd638ba1df1c611df4e83dc62c1d6accaaa1e435e7354601b8eccfeb0f2c6"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.809839 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"7f6c42325609f2eb34c44fa664016f7c0f958f18dc593574db597a79c2ccbc9c"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.810113 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.814684 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"9695ee3d998d135a358f47b4796bca299a492d2ada9794742aef1822198b557f"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.814750 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f7560ff1bde95153f740a9c05b186adef95164533d38d4876d8999f9cb3543c6"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.820398 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" event={"ID":"0f2c73d3-00d3-491e-8050-fe9f69126993","Type":"ContainerStarted","Data":"c950827431aba5330225a2efc6400e9a9dc2e0758ad5009a5d81eaca15e49cea"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.823095 4970 generic.go:334] "Generic (PLEG): container finished" podID="aa02b988-0645-4466-a5bc-9c99033fdcdc" containerID="f63dc99b44c9cfd8d631c9844dd7061c190b1d5feee606a8440cac1e81285951" exitCode=0
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.823165 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrdxh" event={"ID":"aa02b988-0645-4466-a5bc-9c99033fdcdc","Type":"ContainerDied","Data":"f63dc99b44c9cfd8d631c9844dd7061c190b1d5feee606a8440cac1e81285951"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.842433 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hlzvg"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.851265 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.862362 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pwvxr"]
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.863409 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwvxr"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.877867 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" event={"ID":"94e420ce-a1c2-422a-aaba-0553236d7e6a","Type":"ContainerStarted","Data":"65ba6103c55276ae276019bee28247e160e3e4e666c6afced77f359671c53cd8"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.886768 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"fdf4a6a776861726988e47b3fbc74fa52e7e24461afadcc589bd7d3862000b66"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.886820 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a7ccd42a1324fde9247453d9c3044e125b15405158a29e815ab1f56b3115281a"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.891725 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwvxr"]
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.892042 4970 generic.go:334] "Generic (PLEG): container finished" podID="0d947653-6517-44e3-8673-5c0bcc7aebae" containerID="3ca9536830501148160df89bf28a4d81a247974ff112ca94b0d7d6f5dd37cf32" exitCode=0
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.892152 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2768z" event={"ID":"0d947653-6517-44e3-8673-5c0bcc7aebae","Type":"ContainerDied","Data":"3ca9536830501148160df89bf28a4d81a247974ff112ca94b0d7d6f5dd37cf32"}
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.960867 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9332522-1cef-496d-a3f4-eaa0514c5b81-catalog-content\") pod \"redhat-operators-pwvxr\" (UID: \"e9332522-1cef-496d-a3f4-eaa0514c5b81\") " pod="openshift-marketplace/redhat-operators-pwvxr"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.960964 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9332522-1cef-496d-a3f4-eaa0514c5b81-utilities\") pod \"redhat-operators-pwvxr\" (UID: \"e9332522-1cef-496d-a3f4-eaa0514c5b81\") " pod="openshift-marketplace/redhat-operators-pwvxr"
Sep 30 09:48:58 crc kubenswrapper[4970]: I0930 09:48:58.961042 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6ggg\" (UniqueName: \"kubernetes.io/projected/e9332522-1cef-496d-a3f4-eaa0514c5b81-kube-api-access-w6ggg\") pod \"redhat-operators-pwvxr\" (UID: \"e9332522-1cef-496d-a3f4-eaa0514c5b81\") " pod="openshift-marketplace/redhat-operators-pwvxr"
Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.063259 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9332522-1cef-496d-a3f4-eaa0514c5b81-catalog-content\") pod \"redhat-operators-pwvxr\" (UID: \"e9332522-1cef-496d-a3f4-eaa0514c5b81\") " pod="openshift-marketplace/redhat-operators-pwvxr"
Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.063395 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9332522-1cef-496d-a3f4-eaa0514c5b81-utilities\") pod \"redhat-operators-pwvxr\" (UID: \"e9332522-1cef-496d-a3f4-eaa0514c5b81\") " pod="openshift-marketplace/redhat-operators-pwvxr"
Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.063452 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6ggg\" (UniqueName: \"kubernetes.io/projected/e9332522-1cef-496d-a3f4-eaa0514c5b81-kube-api-access-w6ggg\") pod \"redhat-operators-pwvxr\" (UID: \"e9332522-1cef-496d-a3f4-eaa0514c5b81\") " pod="openshift-marketplace/redhat-operators-pwvxr"
Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.064474 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9332522-1cef-496d-a3f4-eaa0514c5b81-utilities\") pod \"redhat-operators-pwvxr\" (UID: \"e9332522-1cef-496d-a3f4-eaa0514c5b81\") " pod="openshift-marketplace/redhat-operators-pwvxr"
Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.071243 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9332522-1cef-496d-a3f4-eaa0514c5b81-catalog-content\") pod \"redhat-operators-pwvxr\" (UID: \"e9332522-1cef-496d-a3f4-eaa0514c5b81\") " pod="openshift-marketplace/redhat-operators-pwvxr"
\"kubernetes.io/projected/e9332522-1cef-496d-a3f4-eaa0514c5b81-kube-api-access-w6ggg\") pod \"redhat-operators-pwvxr\" (UID: \"e9332522-1cef-496d-a3f4-eaa0514c5b81\") " pod="openshift-marketplace/redhat-operators-pwvxr" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.190914 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.190977 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.198723 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.222746 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-2sdhl" podStartSLOduration=12.222728746 podStartE2EDuration="12.222728746s" podCreationTimestamp="2025-09-30 09:48:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:59.099453537 +0000 UTC m=+152.171304471" watchObservedRunningTime="2025-09-30 09:48:59.222728746 +0000 UTC m=+152.294579680" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.239579 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwvxr" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.268947 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hlzvg"] Sep 30 09:48:59 crc kubenswrapper[4970]: W0930 09:48:59.289598 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3b26d6c_14bb_4572_87c8_6da4476f88a4.slice/crio-a04c8ae3a4a6d408b89172cbce2c3ff13898e3d06e1468741aff0fa3fec61443 WatchSource:0}: Error finding container a04c8ae3a4a6d408b89172cbce2c3ff13898e3d06e1468741aff0fa3fec61443: Status 404 returned error can't find the container with id a04c8ae3a4a6d408b89172cbce2c3ff13898e3d06e1468741aff0fa3fec61443 Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.301031 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.328012 4970 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7zdf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.328076 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7zdf" podUID="200e46f5-be36-4a88-85d0-fb279eba20c5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.328712 4970 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7zdf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.329260 4970 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f7zdf" podUID="200e46f5-be36-4a88-85d0-fb279eba20c5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.348765 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.349364 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.356448 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.394165 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.394231 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.399022 4970 patch_prober.go:28] interesting pod/console-f9d7485db-c8bs2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.399074 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c8bs2" podUID="4eac6509-7889-4976-bcc4-bf65486c098f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.555653 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pwvxr"] Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.589184 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-mnwkr" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.601900 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:48:59 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:48:59 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:48:59 crc kubenswrapper[4970]: healthz check failed Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.601974 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.680638 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.918869 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwvxr" 
event={"ID":"e9332522-1cef-496d-a3f4-eaa0514c5b81","Type":"ContainerStarted","Data":"0ea2254f547f8e4eda0f27b47dbd42ebeb467c6762330c7ce9169ccda904d5ff"} Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.919960 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwvxr" event={"ID":"e9332522-1cef-496d-a3f4-eaa0514c5b81","Type":"ContainerStarted","Data":"fac8cb3cec98057c739170324fb21d79719599758a703e6f7b646421d45db9bf"} Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.933063 4970 generic.go:334] "Generic (PLEG): container finished" podID="90331118-d5f2-4547-84b6-2aaf229316f0" containerID="136bd80c7700147705c9cbb3b6c15a94c81cec134b7fcca1d010de3f986ab9ff" exitCode=0 Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.933202 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdx4x" event={"ID":"90331118-d5f2-4547-84b6-2aaf229316f0","Type":"ContainerDied","Data":"136bd80c7700147705c9cbb3b6c15a94c81cec134b7fcca1d010de3f986ab9ff"} Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.955166 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" event={"ID":"0f2c73d3-00d3-491e-8050-fe9f69126993","Type":"ContainerStarted","Data":"afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf"} Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.955822 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.968598 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"afde4377-4f82-4586-913d-5238cb5f9c05","Type":"ContainerStarted","Data":"fe33b32626c45f2f6cf53dd0f5e4bc2316ed762000ccf9ca886a3bc7ec5d7d5a"} Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.986610 4970 generic.go:334] "Generic (PLEG): container finished" podID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerID="73043786e2365e4eee1df9a567002faddb9e0514af6a71010734961ca5e65720" exitCode=0 Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.986759 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlzvg" event={"ID":"c3b26d6c-14bb-4572-87c8-6da4476f88a4","Type":"ContainerDied","Data":"73043786e2365e4eee1df9a567002faddb9e0514af6a71010734961ca5e65720"} Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.986802 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlzvg" event={"ID":"c3b26d6c-14bb-4572-87c8-6da4476f88a4","Type":"ContainerStarted","Data":"a04c8ae3a4a6d408b89172cbce2c3ff13898e3d06e1468741aff0fa3fec61443"} Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.991264 4970 generic.go:334] "Generic (PLEG): container finished" podID="7535af89-756e-4e84-b9f3-246296ca252e" containerID="698f40eeefc86b59c0ffe4a50850243c02f1109302e2cfb18448e76237bb76de" exitCode=0 Sep 30 09:48:59 crc kubenswrapper[4970]: I0930 09:48:59.992897 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" event={"ID":"7535af89-756e-4e84-b9f3-246296ca252e","Type":"ContainerDied","Data":"698f40eeefc86b59c0ffe4a50850243c02f1109302e2cfb18448e76237bb76de"} Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.002239 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-x2m28" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.004254 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-k4ph9" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.010962 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.010933509 podStartE2EDuration="2.010933509s" podCreationTimestamp="2025-09-30 09:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:48:59.99767425 +0000 UTC m=+153.069525184" watchObservedRunningTime="2025-09-30 09:49:00.010933509 +0000 UTC m=+153.082784443" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.013096 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.014028 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.019180 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.019364 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.030258 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.032709 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" podStartSLOduration=132.032690457 podStartE2EDuration="2m12.032690457s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:49:00.031698011 +0000 UTC m=+153.103548945" watchObservedRunningTime="2025-09-30 09:49:00.032690457 +0000 UTC m=+153.104541391" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.188925 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/753fd6be-c8e3-42ad-bb18-29083f07a9ac-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"753fd6be-c8e3-42ad-bb18-29083f07a9ac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.189162 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/753fd6be-c8e3-42ad-bb18-29083f07a9ac-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"753fd6be-c8e3-42ad-bb18-29083f07a9ac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.292451 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/753fd6be-c8e3-42ad-bb18-29083f07a9ac-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"753fd6be-c8e3-42ad-bb18-29083f07a9ac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 09:49:00 crc 
kubenswrapper[4970]: I0930 09:49:00.292507 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/753fd6be-c8e3-42ad-bb18-29083f07a9ac-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"753fd6be-c8e3-42ad-bb18-29083f07a9ac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.292633 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/753fd6be-c8e3-42ad-bb18-29083f07a9ac-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"753fd6be-c8e3-42ad-bb18-29083f07a9ac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.328103 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/753fd6be-c8e3-42ad-bb18-29083f07a9ac-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"753fd6be-c8e3-42ad-bb18-29083f07a9ac\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.345672 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.602013 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:00 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:49:00 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:00 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.602563 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:00 crc kubenswrapper[4970]: I0930 09:49:00.792241 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Sep 30 09:49:00 crc kubenswrapper[4970]: W0930 09:49:00.825432 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod753fd6be_c8e3_42ad_bb18_29083f07a9ac.slice/crio-9744207461a6bb7bbea885805a45a705594f014bfd821417d12a89f1def19316 WatchSource:0}: Error finding container 9744207461a6bb7bbea885805a45a705594f014bfd821417d12a89f1def19316: Status 404 returned error can't find the container with id 9744207461a6bb7bbea885805a45a705594f014bfd821417d12a89f1def19316 Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.026881 4970 generic.go:334] "Generic (PLEG): container finished" podID="afde4377-4f82-4586-913d-5238cb5f9c05" containerID="ab1c7e37ff9621828ef735f74f7ea5f152adda2071f7ae309a936c19bdd58dd0" exitCode=0 Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.027866 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"afde4377-4f82-4586-913d-5238cb5f9c05","Type":"ContainerDied","Data":"ab1c7e37ff9621828ef735f74f7ea5f152adda2071f7ae309a936c19bdd58dd0"} Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.034437 4970 generic.go:334] "Generic (PLEG): container finished" 
podID="e9332522-1cef-496d-a3f4-eaa0514c5b81" containerID="0ea2254f547f8e4eda0f27b47dbd42ebeb467c6762330c7ce9169ccda904d5ff" exitCode=0 Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.034678 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwvxr" event={"ID":"e9332522-1cef-496d-a3f4-eaa0514c5b81","Type":"ContainerDied","Data":"0ea2254f547f8e4eda0f27b47dbd42ebeb467c6762330c7ce9169ccda904d5ff"} Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.090811 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"753fd6be-c8e3-42ad-bb18-29083f07a9ac","Type":"ContainerStarted","Data":"9744207461a6bb7bbea885805a45a705594f014bfd821417d12a89f1def19316"} Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.596863 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:01 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:49:01 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:01 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.597293 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.721152 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.830862 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5bfn\" (UniqueName: \"kubernetes.io/projected/7535af89-756e-4e84-b9f3-246296ca252e-kube-api-access-d5bfn\") pod \"7535af89-756e-4e84-b9f3-246296ca252e\" (UID: \"7535af89-756e-4e84-b9f3-246296ca252e\") " Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.831259 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7535af89-756e-4e84-b9f3-246296ca252e-config-volume\") pod \"7535af89-756e-4e84-b9f3-246296ca252e\" (UID: \"7535af89-756e-4e84-b9f3-246296ca252e\") " Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.831347 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7535af89-756e-4e84-b9f3-246296ca252e-secret-volume\") pod \"7535af89-756e-4e84-b9f3-246296ca252e\" (UID: \"7535af89-756e-4e84-b9f3-246296ca252e\") " Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.836123 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7535af89-756e-4e84-b9f3-246296ca252e-config-volume" (OuterVolumeSpecName: "config-volume") pod "7535af89-756e-4e84-b9f3-246296ca252e" (UID: "7535af89-756e-4e84-b9f3-246296ca252e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.866225 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7535af89-756e-4e84-b9f3-246296ca252e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7535af89-756e-4e84-b9f3-246296ca252e" (UID: "7535af89-756e-4e84-b9f3-246296ca252e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.893284 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7535af89-756e-4e84-b9f3-246296ca252e-kube-api-access-d5bfn" (OuterVolumeSpecName: "kube-api-access-d5bfn") pod "7535af89-756e-4e84-b9f3-246296ca252e" (UID: "7535af89-756e-4e84-b9f3-246296ca252e"). InnerVolumeSpecName "kube-api-access-d5bfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.934675 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5bfn\" (UniqueName: \"kubernetes.io/projected/7535af89-756e-4e84-b9f3-246296ca252e-kube-api-access-d5bfn\") on node \"crc\" DevicePath \"\"" Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.934722 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7535af89-756e-4e84-b9f3-246296ca252e-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 09:49:01 crc kubenswrapper[4970]: I0930 09:49:01.934733 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7535af89-756e-4e84-b9f3-246296ca252e-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.203685 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"753fd6be-c8e3-42ad-bb18-29083f07a9ac","Type":"ContainerStarted","Data":"77923695ad856fbf35bf04f888a988fbfde5ffb68b5a69e95d98ee5502bee33e"} Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.212233 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.215165 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624" event={"ID":"7535af89-756e-4e84-b9f3-246296ca252e","Type":"ContainerDied","Data":"4916c6f85c5cc59363c1aa30c9723870c51dd2657a5a6f1749fb05068be2f572"} Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.215250 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4916c6f85c5cc59363c1aa30c9723870c51dd2657a5a6f1749fb05068be2f572" Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.233260 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.233228637 podStartE2EDuration="3.233228637s" podCreationTimestamp="2025-09-30 09:48:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:49:02.226117255 +0000 UTC m=+155.297968189" watchObservedRunningTime="2025-09-30 09:49:02.233228637 +0000 UTC m=+155.305079571" Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.592185 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.596933 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:02 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:49:02 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:02 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.604740 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.653943 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afde4377-4f82-4586-913d-5238cb5f9c05-kubelet-dir\") pod \"afde4377-4f82-4586-913d-5238cb5f9c05\" (UID: \"afde4377-4f82-4586-913d-5238cb5f9c05\") " Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.654072 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afde4377-4f82-4586-913d-5238cb5f9c05-kube-api-access\") pod \"afde4377-4f82-4586-913d-5238cb5f9c05\" (UID: \"afde4377-4f82-4586-913d-5238cb5f9c05\") " Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.655224 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/afde4377-4f82-4586-913d-5238cb5f9c05-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "afde4377-4f82-4586-913d-5238cb5f9c05" (UID: "afde4377-4f82-4586-913d-5238cb5f9c05"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.668917 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afde4377-4f82-4586-913d-5238cb5f9c05-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "afde4377-4f82-4586-913d-5238cb5f9c05" (UID: "afde4377-4f82-4586-913d-5238cb5f9c05"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.756102 4970 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/afde4377-4f82-4586-913d-5238cb5f9c05-kubelet-dir\") on node \"crc\" DevicePath \"\"" Sep 30 09:49:02 crc kubenswrapper[4970]: I0930 09:49:02.756137 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/afde4377-4f82-4586-913d-5238cb5f9c05-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 09:49:03 crc kubenswrapper[4970]: I0930 09:49:03.272378 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Sep 30 09:49:03 crc kubenswrapper[4970]: I0930 09:49:03.272457 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"afde4377-4f82-4586-913d-5238cb5f9c05","Type":"ContainerDied","Data":"fe33b32626c45f2f6cf53dd0f5e4bc2316ed762000ccf9ca886a3bc7ec5d7d5a"} Sep 30 09:49:03 crc kubenswrapper[4970]: I0930 09:49:03.272604 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe33b32626c45f2f6cf53dd0f5e4bc2316ed762000ccf9ca886a3bc7ec5d7d5a" Sep 30 09:49:03 crc kubenswrapper[4970]: I0930 09:49:03.601832 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:03 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:49:03 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:03 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:03 crc kubenswrapper[4970]: I0930 09:49:03.601922 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:04 crc kubenswrapper[4970]: I0930 09:49:04.306786 4970 generic.go:334] "Generic (PLEG): container finished" podID="753fd6be-c8e3-42ad-bb18-29083f07a9ac" containerID="77923695ad856fbf35bf04f888a988fbfde5ffb68b5a69e95d98ee5502bee33e" exitCode=0 Sep 30 09:49:04 crc kubenswrapper[4970]: I0930 09:49:04.307520 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"753fd6be-c8e3-42ad-bb18-29083f07a9ac","Type":"ContainerDied","Data":"77923695ad856fbf35bf04f888a988fbfde5ffb68b5a69e95d98ee5502bee33e"} Sep 30 09:49:04 crc kubenswrapper[4970]: I0930 09:49:04.589519 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:04 crc kubenswrapper[4970]: 
[-]has-synced failed: reason withheld Sep 30 09:49:04 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:04 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:04 crc kubenswrapper[4970]: I0930 09:49:04.589576 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:04 crc kubenswrapper[4970]: I0930 09:49:04.821537 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 09:49:04 crc kubenswrapper[4970]: I0930 09:49:04.821596 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 09:49:05 crc kubenswrapper[4970]: I0930 09:49:05.267192 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-svmb7" Sep 30 09:49:05 crc kubenswrapper[4970]: I0930 09:49:05.591010 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:05 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:49:05 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:05 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:05 crc kubenswrapper[4970]: I0930 09:49:05.591078 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:05 crc kubenswrapper[4970]: I0930 09:49:05.859205 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 09:49:05 crc kubenswrapper[4970]: I0930 09:49:05.947947 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/753fd6be-c8e3-42ad-bb18-29083f07a9ac-kubelet-dir\") pod \"753fd6be-c8e3-42ad-bb18-29083f07a9ac\" (UID: \"753fd6be-c8e3-42ad-bb18-29083f07a9ac\") " Sep 30 09:49:05 crc kubenswrapper[4970]: I0930 09:49:05.948166 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/753fd6be-c8e3-42ad-bb18-29083f07a9ac-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "753fd6be-c8e3-42ad-bb18-29083f07a9ac" (UID: "753fd6be-c8e3-42ad-bb18-29083f07a9ac"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:49:05 crc kubenswrapper[4970]: I0930 09:49:05.948367 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/753fd6be-c8e3-42ad-bb18-29083f07a9ac-kube-api-access\") pod \"753fd6be-c8e3-42ad-bb18-29083f07a9ac\" (UID: \"753fd6be-c8e3-42ad-bb18-29083f07a9ac\") " Sep 30 09:49:05 crc kubenswrapper[4970]: I0930 09:49:05.948763 4970 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/753fd6be-c8e3-42ad-bb18-29083f07a9ac-kubelet-dir\") on node \"crc\" DevicePath \"\"" Sep 30 09:49:05 crc kubenswrapper[4970]: I0930 09:49:05.957967 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/753fd6be-c8e3-42ad-bb18-29083f07a9ac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "753fd6be-c8e3-42ad-bb18-29083f07a9ac" (UID: "753fd6be-c8e3-42ad-bb18-29083f07a9ac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:49:06 crc kubenswrapper[4970]: I0930 09:49:06.050878 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/753fd6be-c8e3-42ad-bb18-29083f07a9ac-kube-api-access\") on node \"crc\" DevicePath \"\"" Sep 30 09:49:06 crc kubenswrapper[4970]: I0930 09:49:06.356096 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"753fd6be-c8e3-42ad-bb18-29083f07a9ac","Type":"ContainerDied","Data":"9744207461a6bb7bbea885805a45a705594f014bfd821417d12a89f1def19316"} Sep 30 09:49:06 crc kubenswrapper[4970]: I0930 09:49:06.356184 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Sep 30 09:49:06 crc kubenswrapper[4970]: I0930 09:49:06.356198 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9744207461a6bb7bbea885805a45a705594f014bfd821417d12a89f1def19316" Sep 30 09:49:06 crc kubenswrapper[4970]: I0930 09:49:06.592372 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:06 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:49:06 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:06 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:06 crc kubenswrapper[4970]: I0930 09:49:06.592463 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:07 crc kubenswrapper[4970]: I0930 09:49:07.592435 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:07 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:49:07 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:07 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:07 crc kubenswrapper[4970]: I0930 09:49:07.593003 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:08 crc kubenswrapper[4970]: I0930 09:49:08.592033 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:08 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:49:08 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:08 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:08 crc kubenswrapper[4970]: I0930 09:49:08.592159 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:09 crc kubenswrapper[4970]: I0930 09:49:09.328156 4970 patch_prober.go:28] interesting pod/downloads-7954f5f757-f7zdf container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Sep 30 09:49:09 crc kubenswrapper[4970]: I0930 09:49:09.328572 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-f7zdf" podUID="200e46f5-be36-4a88-85d0-fb279eba20c5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Sep 30 09:49:09 crc kubenswrapper[4970]: I0930 09:49:09.328160 4970 
patch_prober.go:28] interesting pod/downloads-7954f5f757-f7zdf container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Sep 30 09:49:09 crc kubenswrapper[4970]: I0930 09:49:09.329284 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-f7zdf" podUID="200e46f5-be36-4a88-85d0-fb279eba20c5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Sep 30 09:49:09 crc kubenswrapper[4970]: I0930 09:49:09.394455 4970 patch_prober.go:28] interesting pod/console-f9d7485db-c8bs2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Sep 30 09:49:09 crc kubenswrapper[4970]: I0930 09:49:09.394514 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-c8bs2" podUID="4eac6509-7889-4976-bcc4-bf65486c098f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.15:8443/health\": dial tcp 10.217.0.15:8443: connect: connection refused" Sep 30 09:49:09 crc kubenswrapper[4970]: I0930 09:49:09.591380 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:09 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:49:09 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:09 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:09 crc kubenswrapper[4970]: I0930 09:49:09.591468 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:10 crc kubenswrapper[4970]: I0930 09:49:10.590428 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:10 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:49:10 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:10 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:10 crc kubenswrapper[4970]: I0930 09:49:10.591131 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:11 crc kubenswrapper[4970]: I0930 09:49:11.540526 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:49:11 crc kubenswrapper[4970]: I0930 09:49:11.546196 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c-metrics-certs\") pod \"network-metrics-daemon-sgksk\" (UID: \"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c\") " pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:49:11 crc kubenswrapper[4970]: I0930 09:49:11.591049 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:11 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:49:11 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:11 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:11 crc kubenswrapper[4970]: I0930 09:49:11.591160 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:11 crc kubenswrapper[4970]: I0930 09:49:11.725015 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-sgksk" Sep 30 09:49:12 crc kubenswrapper[4970]: I0930 09:49:12.591007 4970 patch_prober.go:28] interesting pod/router-default-5444994796-mnwkr container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Sep 30 09:49:12 crc kubenswrapper[4970]: [-]has-synced failed: reason withheld Sep 30 09:49:12 crc kubenswrapper[4970]: [+]process-running ok Sep 30 09:49:12 crc kubenswrapper[4970]: healthz check failed Sep 30 09:49:12 crc kubenswrapper[4970]: I0930 09:49:12.591578 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-mnwkr" podUID="f9105a16-9a94-4ae6-b78d-9eb3b7b0535b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Sep 30 09:49:13 crc kubenswrapper[4970]: I0930 09:49:13.592922 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-mnwkr" Sep 30 09:49:13 crc kubenswrapper[4970]: I0930 09:49:13.596748 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-mnwkr" Sep 30 09:49:18 crc kubenswrapper[4970]: I0930 09:49:18.138659 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:49:19 crc kubenswrapper[4970]: I0930 09:49:19.343517 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-f7zdf" Sep 30 09:49:19 crc kubenswrapper[4970]: I0930 09:49:19.429611 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:49:19 crc kubenswrapper[4970]: I0930 09:49:19.433704 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:49:30 crc kubenswrapper[4970]: I0930 09:49:30.229650 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sm49z" Sep 30 09:49:34 crc kubenswrapper[4970]: I0930 09:49:34.821864 4970 patch_prober.go:28] interesting 
Sep 30 09:49:34 crc kubenswrapper[4970]: I0930 09:49:34.822277 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 09:49:35 crc kubenswrapper[4970]: E0930 09:49:35.274065 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Sep 30 09:49:35 crc kubenswrapper[4970]: E0930 09:49:35.274282 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n67j7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-gjvrm_openshift-marketplace(a516da21-a8ba-423e-85ae-49cb3383e933): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 09:49:35 crc kubenswrapper[4970]: E0930 09:49:35.275592 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-gjvrm" podUID="a516da21-a8ba-423e-85ae-49cb3383e933"
Sep 30 09:49:35 crc kubenswrapper[4970]: E0930 09:49:35.414608 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Sep 30 09:49:35 crc kubenswrapper[4970]: E0930 09:49:35.414775 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xgrcp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5f7bk_openshift-marketplace(feeba15d-0ba3-40c7-9ddb-cbb90c803826): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 09:49:35 crc kubenswrapper[4970]: E0930 09:49:35.416098 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5f7bk" podUID="feeba15d-0ba3-40c7-9ddb-cbb90c803826"
Sep 30 09:49:37 crc kubenswrapper[4970]: I0930 09:49:37.060072 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Sep 30 09:49:39 crc kubenswrapper[4970]: E0930 09:49:39.592913 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Sep 30 09:49:39 crc kubenswrapper[4970]: E0930 09:49:39.593109 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5rtcw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-rrdxh_openshift-marketplace(aa02b988-0645-4466-a5bc-9c99033fdcdc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 09:49:39 crc kubenswrapper[4970]: E0930 09:49:39.594453 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-rrdxh" podUID="aa02b988-0645-4466-a5bc-9c99033fdcdc"
Sep 30 09:49:41 crc kubenswrapper[4970]: E0930 09:49:41.659210 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-rrdxh" podUID="aa02b988-0645-4466-a5bc-9c99033fdcdc"
Sep 30 09:49:41 crc kubenswrapper[4970]: E0930 09:49:41.659306 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5f7bk" podUID="feeba15d-0ba3-40c7-9ddb-cbb90c803826"
Sep 30 09:49:41 crc kubenswrapper[4970]: E0930 09:49:41.659321 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-gjvrm" podUID="a516da21-a8ba-423e-85ae-49cb3383e933"
Sep 30 09:49:42 crc kubenswrapper[4970]: E0930 09:49:42.292962 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Sep 30 09:49:42 crc kubenswrapper[4970]: E0930 09:49:42.293161 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cb8r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rg42w_openshift-marketplace(8140e36b-113c-4dbb-982d-4f94ec7c0a5f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 09:49:42 crc kubenswrapper[4970]: E0930 09:49:42.294423 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rg42w" podUID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f"
Sep 30 09:49:44 crc kubenswrapper[4970]: E0930 09:49:44.938964 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rg42w" podUID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f"
Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.026723 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.027457 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w6ggg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pwvxr_openshift-marketplace(e9332522-1cef-496d-a3f4-eaa0514c5b81): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.028005 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.028115 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqsk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2768z_openshift-marketplace(0d947653-6517-44e3-8673-5c0bcc7aebae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
certified-operators-2768z_openshift-marketplace(0d947653-6517-44e3-8673-5c0bcc7aebae): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.029281 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pwvxr" podUID="e9332522-1cef-496d-a3f4-eaa0514c5b81" Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.029471 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2768z" podUID="0d947653-6517-44e3-8673-5c0bcc7aebae" Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.072890 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.073076 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4579,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rdx4x_openshift-marketplace(90331118-d5f2-4547-84b6-2aaf229316f0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.074867 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled\"" pod="openshift-marketplace/redhat-marketplace-rdx4x" podUID="90331118-d5f2-4547-84b6-2aaf229316f0" Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.081713 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.082022 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tszq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hlzvg_openshift-marketplace(c3b26d6c-14bb-4572-87c8-6da4476f88a4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.083227 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hlzvg" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" Sep 30 09:49:45 crc kubenswrapper[4970]: I0930 09:49:45.146651 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-sgksk"] Sep 30 09:49:45 crc kubenswrapper[4970]: I0930 09:49:45.647318 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sgksk" event={"ID":"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c","Type":"ContainerStarted","Data":"8bfbd3ceb637d70925516bf7eab9d9433ed31833ae5db2feb530252ef60d4a82"} Sep 30 09:49:45 crc kubenswrapper[4970]: I0930 09:49:45.647511 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sgksk" event={"ID":"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c","Type":"ContainerStarted","Data":"4992d39e0e3ab4686e1c2032083e1cb36956821ee760419397e95507dd242748"} Sep 
30 09:49:45 crc kubenswrapper[4970]: I0930 09:49:45.647542 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-sgksk" event={"ID":"8fbb2be8-a8c3-4994-abdb-0a3e8a5f417c","Type":"ContainerStarted","Data":"7b9e85f4c164aab465240e3d42ea960f31c0e3403a05a375976c378f6b1369c5"} Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.648647 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pwvxr" podUID="e9332522-1cef-496d-a3f4-eaa0514c5b81" Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.648947 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-2768z" podUID="0d947653-6517-44e3-8673-5c0bcc7aebae" Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.650281 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hlzvg" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" Sep 30 09:49:45 crc kubenswrapper[4970]: E0930 09:49:45.650340 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rdx4x" podUID="90331118-d5f2-4547-84b6-2aaf229316f0" Sep 30 09:49:45 crc kubenswrapper[4970]: I0930 09:49:45.701507 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-sgksk" podStartSLOduration=177.70148925 podStartE2EDuration="2m57.70148925s" podCreationTimestamp="2025-09-30 09:46:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:49:45.699602061 +0000 UTC m=+198.771453005" watchObservedRunningTime="2025-09-30 09:49:45.70148925 +0000 UTC m=+198.773340184" Sep 30 09:49:56 crc kubenswrapper[4970]: I0930 09:49:56.758901 4970 generic.go:334] "Generic (PLEG): container finished" podID="a516da21-a8ba-423e-85ae-49cb3383e933" containerID="15cf63865d78488fe440d8b764f33b5af90d9836fb67a1e48f9ce3c5e217c83b" exitCode=0 Sep 30 09:49:56 crc kubenswrapper[4970]: I0930 09:49:56.759041 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjvrm" event={"ID":"a516da21-a8ba-423e-85ae-49cb3383e933","Type":"ContainerDied","Data":"15cf63865d78488fe440d8b764f33b5af90d9836fb67a1e48f9ce3c5e217c83b"} Sep 30 09:49:56 crc kubenswrapper[4970]: I0930 09:49:56.764640 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrdxh" event={"ID":"aa02b988-0645-4466-a5bc-9c99033fdcdc","Type":"ContainerDied","Data":"76dd372d46773f0c7132b9f8d8a900dedcbc2ad6c8ca7faacc3fc44691ba8895"} Sep 30 09:49:56 crc kubenswrapper[4970]: I0930 09:49:56.765215 4970 generic.go:334] "Generic (PLEG): container finished" podID="aa02b988-0645-4466-a5bc-9c99033fdcdc" containerID="76dd372d46773f0c7132b9f8d8a900dedcbc2ad6c8ca7faacc3fc44691ba8895" exitCode=0
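The 09:49:45 burst above shows the same canceled CRI pull surfacing three ways per pod (the log.go "PullImage from image service failed" line, the kuberuntime_manager "Unhandled Error" dump of the init-container spec, and the pod_workers "Error syncing pod" line), after which the affected catalog pods flip to ImagePullBackOff. A minimal sketch for tallying which pods are stuck in backoff from a journal dump; the regex and helper name are mine and assume journal text in exactly the shape quoted above:

```python
import re
from collections import Counter

# Kubelet pod_workers.go "Error syncing pod" lines that carry ImagePullBackOff,
# in the shape seen above; illustrative only, not part of kubelet.
BACKOFF_RE = re.compile(r'with ImagePullBackOff:.*?pod="(?P<pod>[^"]+)"')

def backoff_counts(journal_text: str) -> Counter:
    """Tally ImagePullBackOff sync errors per pod (namespace/name)."""
    return Counter(m.group("pod") for m in BACKOFF_RE.finditer(journal_text))

# One line in the shape logged above (abbreviated, hypothetical input):
sample = ('E0930 09:49:45.648647 4970 pod_workers.go:1301] "Error syncing pod, '
          'skipping" err="... with ImagePullBackOff: ..." '
          'pod="openshift-marketplace/redhat-operators-pwvxr" podUID="e9332522-..."')
print(backoff_counts(sample))
# Counter({'openshift-marketplace/redhat-operators-pwvxr': 1})
```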
containerID="76dd372d46773f0c7132b9f8d8a900dedcbc2ad6c8ca7faacc3fc44691ba8895" exitCode=0 Sep 30 09:49:56 crc kubenswrapper[4970]: I0930 09:49:56.774402 4970 generic.go:334] "Generic (PLEG): container finished" podID="feeba15d-0ba3-40c7-9ddb-cbb90c803826" containerID="a0dcf80b6d7a258a7b2dd710a8c944f7f946db74239b2d96842150470871f774" exitCode=0 Sep 30 09:49:56 crc kubenswrapper[4970]: I0930 09:49:56.774463 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f7bk" event={"ID":"feeba15d-0ba3-40c7-9ddb-cbb90c803826","Type":"ContainerDied","Data":"a0dcf80b6d7a258a7b2dd710a8c944f7f946db74239b2d96842150470871f774"} Sep 30 09:49:57 crc kubenswrapper[4970]: I0930 09:49:57.806081 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwvxr" event={"ID":"e9332522-1cef-496d-a3f4-eaa0514c5b81","Type":"ContainerStarted","Data":"83875cd2698c78595fa1006ee8a47e5b4bbd63a08f19f5137e53b68dd1dbaf6a"} Sep 30 09:49:57 crc kubenswrapper[4970]: I0930 09:49:57.810214 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f7bk" event={"ID":"feeba15d-0ba3-40c7-9ddb-cbb90c803826","Type":"ContainerStarted","Data":"5081791e0ae54d1f1687ca38d7270f7688b5019ade031a0ea17350bc529e0435"} Sep 30 09:49:57 crc kubenswrapper[4970]: I0930 09:49:57.812535 4970 generic.go:334] "Generic (PLEG): container finished" podID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f" containerID="e1b46706c4b0bc69c713985077f7ff82f23d7ec17e4e428d3cc4225bb5465dda" exitCode=0 Sep 30 09:49:57 crc kubenswrapper[4970]: I0930 09:49:57.812615 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rg42w" event={"ID":"8140e36b-113c-4dbb-982d-4f94ec7c0a5f","Type":"ContainerDied","Data":"e1b46706c4b0bc69c713985077f7ff82f23d7ec17e4e428d3cc4225bb5465dda"} Sep 30 09:49:57 crc kubenswrapper[4970]: I0930 09:49:57.821829 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjvrm" event={"ID":"a516da21-a8ba-423e-85ae-49cb3383e933","Type":"ContainerStarted","Data":"ec0560f41c6d9e2abb4ad43e26d0418679550f95421ccf2257706d684f6d96bf"} Sep 30 09:49:57 crc kubenswrapper[4970]: I0930 09:49:57.824778 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrdxh" event={"ID":"aa02b988-0645-4466-a5bc-9c99033fdcdc","Type":"ContainerStarted","Data":"d65937ced1ac37265f872dc261dbdce7b4b5c270e7bf92cecdc35fb469184de2"} Sep 30 09:49:57 crc kubenswrapper[4970]: I0930 09:49:57.869668 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rrdxh" podStartSLOduration=4.325582092 podStartE2EDuration="1m2.869636843s" podCreationTimestamp="2025-09-30 09:48:55 +0000 UTC" firstStartedPulling="2025-09-30 09:48:58.838247455 +0000 UTC m=+151.910098389" lastFinishedPulling="2025-09-30 09:49:57.382302196 +0000 UTC m=+210.454153140" observedRunningTime="2025-09-30 09:49:57.869234372 +0000 UTC m=+210.941085336" watchObservedRunningTime="2025-09-30 09:49:57.869636843 +0000 UTC m=+210.941487817" Sep 30 09:49:57 crc kubenswrapper[4970]: I0930 09:49:57.922253 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gjvrm" podStartSLOduration=3.350088419 podStartE2EDuration="1m2.922228344s" podCreationTimestamp="2025-09-30 09:48:55 +0000 UTC" firstStartedPulling="2025-09-30 09:48:57.720644851 +0000 UTC 
m=+150.792495785" lastFinishedPulling="2025-09-30 09:49:57.292784746 +0000 UTC m=+210.364635710" observedRunningTime="2025-09-30 09:49:57.918212958 +0000 UTC m=+210.990063922" watchObservedRunningTime="2025-09-30 09:49:57.922228344 +0000 UTC m=+210.994079278" Sep 30 09:49:57 crc kubenswrapper[4970]: I0930 09:49:57.942638 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5f7bk" podStartSLOduration=3.265931829 podStartE2EDuration="1m2.942614999s" podCreationTimestamp="2025-09-30 09:48:55 +0000 UTC" firstStartedPulling="2025-09-30 09:48:57.673529624 +0000 UTC m=+150.745380568" lastFinishedPulling="2025-09-30 09:49:57.350212794 +0000 UTC m=+210.422063738" observedRunningTime="2025-09-30 09:49:57.939578629 +0000 UTC m=+211.011429603" watchObservedRunningTime="2025-09-30 09:49:57.942614999 +0000 UTC m=+211.014465933" Sep 30 09:49:58 crc kubenswrapper[4970]: I0930 09:49:58.832820 4970 generic.go:334] "Generic (PLEG): container finished" podID="e9332522-1cef-496d-a3f4-eaa0514c5b81" containerID="83875cd2698c78595fa1006ee8a47e5b4bbd63a08f19f5137e53b68dd1dbaf6a" exitCode=0 Sep 30 09:49:58 crc kubenswrapper[4970]: I0930 09:49:58.832912 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwvxr" event={"ID":"e9332522-1cef-496d-a3f4-eaa0514c5b81","Type":"ContainerDied","Data":"83875cd2698c78595fa1006ee8a47e5b4bbd63a08f19f5137e53b68dd1dbaf6a"} Sep 30 09:49:58 crc kubenswrapper[4970]: I0930 09:49:58.836495 4970 generic.go:334] "Generic (PLEG): container finished" podID="90331118-d5f2-4547-84b6-2aaf229316f0" containerID="65f1ce25e67189ca516c424bf2d97da69e4a255a71690b35dfdc6f338267cfe2" exitCode=0 Sep 30 09:49:58 crc kubenswrapper[4970]: I0930 09:49:58.836603 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdx4x" event={"ID":"90331118-d5f2-4547-84b6-2aaf229316f0","Type":"ContainerDied","Data":"65f1ce25e67189ca516c424bf2d97da69e4a255a71690b35dfdc6f338267cfe2"} Sep 30 09:49:58 crc kubenswrapper[4970]: I0930 09:49:58.841437 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rg42w" event={"ID":"8140e36b-113c-4dbb-982d-4f94ec7c0a5f","Type":"ContainerStarted","Data":"8ea7410cbe711382738e9417de1c75bae7c86413c1b8980b4384062de52f2dba"} Sep 30 09:49:58 crc kubenswrapper[4970]: I0930 09:49:58.901279 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rg42w" podStartSLOduration=2.300742181 podStartE2EDuration="1m1.90125102s" podCreationTimestamp="2025-09-30 09:48:57 +0000 UTC" firstStartedPulling="2025-09-30 09:48:58.803881545 +0000 UTC m=+151.875732479" lastFinishedPulling="2025-09-30 09:49:58.404390384 +0000 UTC m=+211.476241318" observedRunningTime="2025-09-30 09:49:58.899050203 +0000 UTC m=+211.970901157" watchObservedRunningTime="2025-09-30 09:49:58.90125102 +0000 UTC m=+211.973101954" Sep 30 09:49:59 crc kubenswrapper[4970]: I0930 09:49:59.852810 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdx4x" event={"ID":"90331118-d5f2-4547-84b6-2aaf229316f0","Type":"ContainerStarted","Data":"deb24a48c84c6a6b5715e0640736ebf2009635efb69acd7bae791eaee9c6565a"}
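Each pod_startup_latency_tracker record above carries four timestamps, and the numbers are consistent with podStartSLOduration being the end-to-end startup time minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check with values copied from the certified-operators-rrdxh record above; the variable names are mine, and the m=+ offsets are seconds on the kubelet's monotonic clock:

```python
# Values copied from the certified-operators-rrdxh latency record above.
first_started_pulling = 151.910098389  # firstStartedPulling, m=+ offset
last_finished_pulling = 210.454153140  # lastFinishedPulling, m=+ offset
e2e = 62.869636843                     # podStartE2EDuration "1m2.869636843s"

pull_window = last_finished_pulling - first_started_pulling
print(f"image pull window: {pull_window:.9f}s")        # 58.544054751s
print(f"E2E minus pulls:   {e2e - pull_window:.9f}s")  # 4.325582092s, matching podStartSLOduration
```

The same subtraction reproduces the SLO figure for the other marketplace pods in this section, which is why pods that spent a minute in ImagePullBackOff still report an SLO duration of only a few seconds.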
podStartE2EDuration="1m2.881650094s" podCreationTimestamp="2025-09-30 09:48:57 +0000 UTC" firstStartedPulling="2025-09-30 09:48:59.955166841 +0000 UTC m=+153.027017785" lastFinishedPulling="2025-09-30 09:49:59.379001555 +0000 UTC m=+212.450852499" observedRunningTime="2025-09-30 09:49:59.880715329 +0000 UTC m=+212.952566263" watchObservedRunningTime="2025-09-30 09:49:59.881650094 +0000 UTC m=+212.953501028" Sep 30 09:50:00 crc kubenswrapper[4970]: I0930 09:50:00.859850 4970 generic.go:334] "Generic (PLEG): container finished" podID="0d947653-6517-44e3-8673-5c0bcc7aebae" containerID="3922939dedf80484f6d92ab28fcf7f632eccd1b5a680730f8a77937b5bf3e736" exitCode=0 Sep 30 09:50:00 crc kubenswrapper[4970]: I0930 09:50:00.859927 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2768z" event={"ID":"0d947653-6517-44e3-8673-5c0bcc7aebae","Type":"ContainerDied","Data":"3922939dedf80484f6d92ab28fcf7f632eccd1b5a680730f8a77937b5bf3e736"} Sep 30 09:50:00 crc kubenswrapper[4970]: I0930 09:50:00.864144 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwvxr" event={"ID":"e9332522-1cef-496d-a3f4-eaa0514c5b81","Type":"ContainerStarted","Data":"b88c24326c0d9a22363720c8b38f186c5823bf2246ea91728123f08cf9e5d749"} Sep 30 09:50:00 crc kubenswrapper[4970]: I0930 09:50:00.904390 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pwvxr" podStartSLOduration=4.230138745 podStartE2EDuration="1m2.904364477s" podCreationTimestamp="2025-09-30 09:48:58 +0000 UTC" firstStartedPulling="2025-09-30 09:49:01.055597185 +0000 UTC m=+154.127448119" lastFinishedPulling="2025-09-30 09:49:59.729822917 +0000 UTC m=+212.801673851" observedRunningTime="2025-09-30 09:50:00.897466746 +0000 UTC m=+213.969317680" watchObservedRunningTime="2025-09-30 09:50:00.904364477 +0000 UTC m=+213.976215421" Sep 30 09:50:01 crc kubenswrapper[4970]: I0930 09:50:01.874613 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2768z" event={"ID":"0d947653-6517-44e3-8673-5c0bcc7aebae","Type":"ContainerStarted","Data":"87b74c6b7c67e7a55385d875f36be87ff936030c2368e72200b7c3b2910d3fda"} Sep 30 09:50:01 crc kubenswrapper[4970]: I0930 09:50:01.876578 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlzvg" event={"ID":"c3b26d6c-14bb-4572-87c8-6da4476f88a4","Type":"ContainerStarted","Data":"d1c854b3050c7876390ca8b777641121a83a0d05bc27bee2abad7ab7706d2177"} Sep 30 09:50:01 crc kubenswrapper[4970]: I0930 09:50:01.910316 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2768z" podStartSLOduration=4.444124756 podStartE2EDuration="1m6.91029052s" podCreationTimestamp="2025-09-30 09:48:55 +0000 UTC" firstStartedPulling="2025-09-30 09:48:58.912919568 +0000 UTC m=+151.984770502" lastFinishedPulling="2025-09-30 09:50:01.379085332 +0000 UTC m=+214.450936266" observedRunningTime="2025-09-30 09:50:01.906383788 +0000 UTC m=+214.978234742" watchObservedRunningTime="2025-09-30 09:50:01.91029052 +0000 UTC m=+214.982141494" Sep 30 09:50:02 crc kubenswrapper[4970]: I0930 09:50:02.884548 4970 generic.go:334] "Generic (PLEG): container finished" podID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerID="d1c854b3050c7876390ca8b777641121a83a0d05bc27bee2abad7ab7706d2177" exitCode=0 Sep 30 09:50:02 crc kubenswrapper[4970]: I0930 09:50:02.884643 4970 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlzvg" event={"ID":"c3b26d6c-14bb-4572-87c8-6da4476f88a4","Type":"ContainerDied","Data":"d1c854b3050c7876390ca8b777641121a83a0d05bc27bee2abad7ab7706d2177"} Sep 30 09:50:03 crc kubenswrapper[4970]: I0930 09:50:03.892521 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlzvg" event={"ID":"c3b26d6c-14bb-4572-87c8-6da4476f88a4","Type":"ContainerStarted","Data":"104fce426f0a448287432d5b80da2c139f0be3c1b147a9c4e6f2c1b0dc08a0dc"} Sep 30 09:50:03 crc kubenswrapper[4970]: I0930 09:50:03.912643 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hlzvg" podStartSLOduration=2.341603906 podStartE2EDuration="1m5.912620397s" podCreationTimestamp="2025-09-30 09:48:58 +0000 UTC" firstStartedPulling="2025-09-30 09:48:59.988919535 +0000 UTC m=+153.060770469" lastFinishedPulling="2025-09-30 09:50:03.559936026 +0000 UTC m=+216.631786960" observedRunningTime="2025-09-30 09:50:03.910319766 +0000 UTC m=+216.982170710" watchObservedRunningTime="2025-09-30 09:50:03.912620397 +0000 UTC m=+216.984471331" Sep 30 09:50:04 crc kubenswrapper[4970]: I0930 09:50:04.822323 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 09:50:04 crc kubenswrapper[4970]: I0930 09:50:04.822861 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 09:50:04 crc kubenswrapper[4970]: I0930 09:50:04.822934 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:50:04 crc kubenswrapper[4970]: I0930 09:50:04.823735 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 09:50:04 crc kubenswrapper[4970]: I0930 09:50:04.823876 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba" gracePeriod=600 Sep 30 09:50:05 crc kubenswrapper[4970]: I0930 09:50:05.425800 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:50:05 crc kubenswrapper[4970]: I0930 09:50:05.426656 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:50:05 crc kubenswrapper[4970]: I0930 09:50:05.608318 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gjvrm" 
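The 09:50:04 entries above show an HTTP liveness probe (GET against http://127.0.0.1:8798/health) failing with connection refused, after which the kubelet marks machine-config-daemon for restart and kills it with a 600-second grace period. A standalone sketch of the same style of HTTP probe, illustrative only and not kubelet's prober code:

```python
import urllib.request
import urllib.error

def http_probe(url: str, timeout: float = 1.0) -> tuple[bool, str]:
    """HTTP GET probe: a 2xx/3xx response counts as success, anything else fails."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400, f"HTTP {resp.status}"
    except urllib.error.HTTPError as err:            # 4xx/5xx responses
        return False, f"HTTP {err.code}"
    except (urllib.error.URLError, OSError) as err:  # refused, timeout, DNS, ...
        return False, f'Get "{url}": {err.reason if hasattr(err, "reason") else err}'

# The probe target from the log above; on a machine where nothing listens on
# port 8798 this reproduces the "connection refused" failure output.
ok, detail = http_probe("http://127.0.0.1:8798/health")
print("Liveness probe:", "success" if ok else f"failure: {detail}")
```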
Sep 30 09:50:05 crc kubenswrapper[4970]: I0930 09:50:05.905542 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba" exitCode=0 Sep 30 09:50:05 crc kubenswrapper[4970]: I0930 09:50:05.906526 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba"} Sep 30 09:50:05 crc kubenswrapper[4970]: I0930 09:50:05.906568 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"3256d43a82b3895388a142b25e718f337799e9f30680508a3d1264e9ba385ed1"} Sep 30 09:50:05 crc kubenswrapper[4970]: I0930 09:50:05.959430 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:50:05 crc kubenswrapper[4970]: I0930 09:50:05.959494 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:50:05 crc kubenswrapper[4970]: I0930 09:50:05.961255 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:50:06 crc kubenswrapper[4970]: I0930 09:50:06.021407 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:50:06 crc kubenswrapper[4970]: I0930 09:50:06.037040 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:50:06 crc kubenswrapper[4970]: I0930 09:50:06.037100 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:50:06 crc kubenswrapper[4970]: I0930 09:50:06.076393 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:50:06 crc kubenswrapper[4970]: I0930 09:50:06.198651 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:50:06 crc kubenswrapper[4970]: I0930 09:50:06.199031 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:50:06 crc kubenswrapper[4970]: I0930 09:50:06.240600 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:50:06 crc kubenswrapper[4970]: I0930 09:50:06.953746 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:50:06 crc kubenswrapper[4970]: I0930 09:50:06.958442 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:50:06 crc kubenswrapper[4970]: I0930 09:50:06.958786 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:50:07 crc kubenswrapper[4970]: I0930 09:50:07.862292 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-rg42w" Sep 30 09:50:07 crc kubenswrapper[4970]: I0930 09:50:07.862809 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rg42w" Sep 30 09:50:07 crc kubenswrapper[4970]: I0930 09:50:07.907483 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rg42w" Sep 30 09:50:07 crc kubenswrapper[4970]: I0930 09:50:07.967327 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rg42w" Sep 30 09:50:08 crc kubenswrapper[4970]: I0930 09:50:08.123026 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5f7bk"] Sep 30 09:50:08 crc kubenswrapper[4970]: I0930 09:50:08.309942 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:50:08 crc kubenswrapper[4970]: I0930 09:50:08.310398 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:50:08 crc kubenswrapper[4970]: I0930 09:50:08.363404 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:50:08 crc kubenswrapper[4970]: I0930 09:50:08.843944 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hlzvg" Sep 30 09:50:08 crc kubenswrapper[4970]: I0930 09:50:08.844156 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hlzvg" Sep 30 09:50:08 crc kubenswrapper[4970]: I0930 09:50:08.905122 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hlzvg" Sep 30 09:50:08 crc kubenswrapper[4970]: I0930 09:50:08.927545 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5f7bk" podUID="feeba15d-0ba3-40c7-9ddb-cbb90c803826" containerName="registry-server" containerID="cri-o://5081791e0ae54d1f1687ca38d7270f7688b5019ade031a0ea17350bc529e0435" gracePeriod=2 Sep 30 09:50:08 crc kubenswrapper[4970]: I0930 09:50:08.971027 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hlzvg" Sep 30 09:50:08 crc kubenswrapper[4970]: I0930 09:50:08.973810 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:50:09 crc kubenswrapper[4970]: I0930 09:50:09.240401 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pwvxr" Sep 30 09:50:09 crc kubenswrapper[4970]: I0930 09:50:09.240465 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pwvxr" Sep 30 09:50:09 crc kubenswrapper[4970]: I0930 09:50:09.283463 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pwvxr" Sep 30 09:50:09 crc kubenswrapper[4970]: I0930 09:50:09.917794 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2768z"] Sep 30 09:50:09 crc kubenswrapper[4970]: I0930 09:50:09.939028 4970 generic.go:334] "Generic (PLEG): container finished" 
podID="feeba15d-0ba3-40c7-9ddb-cbb90c803826" containerID="5081791e0ae54d1f1687ca38d7270f7688b5019ade031a0ea17350bc529e0435" exitCode=0 Sep 30 09:50:09 crc kubenswrapper[4970]: I0930 09:50:09.939242 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f7bk" event={"ID":"feeba15d-0ba3-40c7-9ddb-cbb90c803826","Type":"ContainerDied","Data":"5081791e0ae54d1f1687ca38d7270f7688b5019ade031a0ea17350bc529e0435"} Sep 30 09:50:09 crc kubenswrapper[4970]: I0930 09:50:09.939631 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2768z" podUID="0d947653-6517-44e3-8673-5c0bcc7aebae" containerName="registry-server" containerID="cri-o://87b74c6b7c67e7a55385d875f36be87ff936030c2368e72200b7c3b2910d3fda" gracePeriod=2 Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.017151 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pwvxr" Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.424634 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.520128 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdx4x"] Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.546563 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feeba15d-0ba3-40c7-9ddb-cbb90c803826-utilities\") pod \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\" (UID: \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\") " Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.546612 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgrcp\" (UniqueName: \"kubernetes.io/projected/feeba15d-0ba3-40c7-9ddb-cbb90c803826-kube-api-access-xgrcp\") pod \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\" (UID: \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\") " Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.546662 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feeba15d-0ba3-40c7-9ddb-cbb90c803826-catalog-content\") pod \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\" (UID: \"feeba15d-0ba3-40c7-9ddb-cbb90c803826\") " Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.547551 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feeba15d-0ba3-40c7-9ddb-cbb90c803826-utilities" (OuterVolumeSpecName: "utilities") pod "feeba15d-0ba3-40c7-9ddb-cbb90c803826" (UID: "feeba15d-0ba3-40c7-9ddb-cbb90c803826"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.557262 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feeba15d-0ba3-40c7-9ddb-cbb90c803826-kube-api-access-xgrcp" (OuterVolumeSpecName: "kube-api-access-xgrcp") pod "feeba15d-0ba3-40c7-9ddb-cbb90c803826" (UID: "feeba15d-0ba3-40c7-9ddb-cbb90c803826"). InnerVolumeSpecName "kube-api-access-xgrcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.611224 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feeba15d-0ba3-40c7-9ddb-cbb90c803826-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "feeba15d-0ba3-40c7-9ddb-cbb90c803826" (UID: "feeba15d-0ba3-40c7-9ddb-cbb90c803826"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.648153 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feeba15d-0ba3-40c7-9ddb-cbb90c803826-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.648209 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feeba15d-0ba3-40c7-9ddb-cbb90c803826-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.648221 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgrcp\" (UniqueName: \"kubernetes.io/projected/feeba15d-0ba3-40c7-9ddb-cbb90c803826-kube-api-access-xgrcp\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.946062 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5f7bk" event={"ID":"feeba15d-0ba3-40c7-9ddb-cbb90c803826","Type":"ContainerDied","Data":"a33e4917f30511844f9380918f28173e5a217b8373a05e0e002724fdbc4b588e"} Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.946173 4970 scope.go:117] "RemoveContainer" containerID="5081791e0ae54d1f1687ca38d7270f7688b5019ade031a0ea17350bc529e0435" Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.946206 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5f7bk" Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.965801 4970 scope.go:117] "RemoveContainer" containerID="a0dcf80b6d7a258a7b2dd710a8c944f7f946db74239b2d96842150470871f774" Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.977762 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5f7bk"] Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.988466 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5f7bk"] Sep 30 09:50:10 crc kubenswrapper[4970]: I0930 09:50:10.994648 4970 scope.go:117] "RemoveContainer" containerID="0bca685823d0172f0c7d9eedaaacc38f65d1acb41cd7be2807845c1c39414a43" Sep 30 09:50:11 crc kubenswrapper[4970]: I0930 09:50:11.675485 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feeba15d-0ba3-40c7-9ddb-cbb90c803826" path="/var/lib/kubelet/pods/feeba15d-0ba3-40c7-9ddb-cbb90c803826/volumes" Sep 30 09:50:11 crc kubenswrapper[4970]: I0930 09:50:11.956217 4970 generic.go:334] "Generic (PLEG): container finished" podID="0d947653-6517-44e3-8673-5c0bcc7aebae" containerID="87b74c6b7c67e7a55385d875f36be87ff936030c2368e72200b7c3b2910d3fda" exitCode=0 Sep 30 09:50:11 crc kubenswrapper[4970]: I0930 09:50:11.956367 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2768z" event={"ID":"0d947653-6517-44e3-8673-5c0bcc7aebae","Type":"ContainerDied","Data":"87b74c6b7c67e7a55385d875f36be87ff936030c2368e72200b7c3b2910d3fda"} Sep 30 09:50:11 crc kubenswrapper[4970]: I0930 09:50:11.956464 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rdx4x" podUID="90331118-d5f2-4547-84b6-2aaf229316f0" containerName="registry-server" containerID="cri-o://deb24a48c84c6a6b5715e0640736ebf2009635efb69acd7bae791eaee9c6565a" gracePeriod=2 Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.638820 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.775957 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d947653-6517-44e3-8673-5c0bcc7aebae-catalog-content\") pod \"0d947653-6517-44e3-8673-5c0bcc7aebae\" (UID: \"0d947653-6517-44e3-8673-5c0bcc7aebae\") " Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.776026 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d947653-6517-44e3-8673-5c0bcc7aebae-utilities\") pod \"0d947653-6517-44e3-8673-5c0bcc7aebae\" (UID: \"0d947653-6517-44e3-8673-5c0bcc7aebae\") " Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.776184 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqsk9\" (UniqueName: \"kubernetes.io/projected/0d947653-6517-44e3-8673-5c0bcc7aebae-kube-api-access-vqsk9\") pod \"0d947653-6517-44e3-8673-5c0bcc7aebae\" (UID: \"0d947653-6517-44e3-8673-5c0bcc7aebae\") " Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.779097 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d947653-6517-44e3-8673-5c0bcc7aebae-utilities" (OuterVolumeSpecName: "utilities") pod "0d947653-6517-44e3-8673-5c0bcc7aebae" (UID: "0d947653-6517-44e3-8673-5c0bcc7aebae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.786876 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d947653-6517-44e3-8673-5c0bcc7aebae-kube-api-access-vqsk9" (OuterVolumeSpecName: "kube-api-access-vqsk9") pod "0d947653-6517-44e3-8673-5c0bcc7aebae" (UID: "0d947653-6517-44e3-8673-5c0bcc7aebae"). InnerVolumeSpecName "kube-api-access-vqsk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.828831 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d947653-6517-44e3-8673-5c0bcc7aebae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d947653-6517-44e3-8673-5c0bcc7aebae" (UID: "0d947653-6517-44e3-8673-5c0bcc7aebae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.878321 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d947653-6517-44e3-8673-5c0bcc7aebae-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.878360 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d947653-6517-44e3-8673-5c0bcc7aebae-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.878370 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqsk9\" (UniqueName: \"kubernetes.io/projected/0d947653-6517-44e3-8673-5c0bcc7aebae-kube-api-access-vqsk9\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.922354 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwvxr"] Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.922707 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pwvxr" podUID="e9332522-1cef-496d-a3f4-eaa0514c5b81" containerName="registry-server" containerID="cri-o://b88c24326c0d9a22363720c8b38f186c5823bf2246ea91728123f08cf9e5d749" gracePeriod=2 Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.967934 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2768z" event={"ID":"0d947653-6517-44e3-8673-5c0bcc7aebae","Type":"ContainerDied","Data":"689c96d8f4e0ead2514c0ba9b738e9b63937b9b1fe763a728aa5e29f181d8ff0"} Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.968068 4970 scope.go:117] "RemoveContainer" containerID="87b74c6b7c67e7a55385d875f36be87ff936030c2368e72200b7c3b2910d3fda" Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.968296 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2768z" Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.973186 4970 generic.go:334] "Generic (PLEG): container finished" podID="90331118-d5f2-4547-84b6-2aaf229316f0" containerID="deb24a48c84c6a6b5715e0640736ebf2009635efb69acd7bae791eaee9c6565a" exitCode=0 Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.973242 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdx4x" event={"ID":"90331118-d5f2-4547-84b6-2aaf229316f0","Type":"ContainerDied","Data":"deb24a48c84c6a6b5715e0640736ebf2009635efb69acd7bae791eaee9c6565a"} Sep 30 09:50:12 crc kubenswrapper[4970]: I0930 09:50:12.990875 4970 scope.go:117] "RemoveContainer" containerID="3922939dedf80484f6d92ab28fcf7f632eccd1b5a680730f8a77937b5bf3e736" Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.014193 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2768z"] Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.014356 4970 scope.go:117] "RemoveContainer" containerID="3ca9536830501148160df89bf28a4d81a247974ff112ca94b0d7d6f5dd37cf32" Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.017658 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2768z"] Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.455555 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.587016 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90331118-d5f2-4547-84b6-2aaf229316f0-utilities\") pod \"90331118-d5f2-4547-84b6-2aaf229316f0\" (UID: \"90331118-d5f2-4547-84b6-2aaf229316f0\") " Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.587146 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90331118-d5f2-4547-84b6-2aaf229316f0-catalog-content\") pod \"90331118-d5f2-4547-84b6-2aaf229316f0\" (UID: \"90331118-d5f2-4547-84b6-2aaf229316f0\") " Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.587275 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4579\" (UniqueName: \"kubernetes.io/projected/90331118-d5f2-4547-84b6-2aaf229316f0-kube-api-access-q4579\") pod \"90331118-d5f2-4547-84b6-2aaf229316f0\" (UID: \"90331118-d5f2-4547-84b6-2aaf229316f0\") " Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.587766 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90331118-d5f2-4547-84b6-2aaf229316f0-utilities" (OuterVolumeSpecName: "utilities") pod "90331118-d5f2-4547-84b6-2aaf229316f0" (UID: "90331118-d5f2-4547-84b6-2aaf229316f0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.591403 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90331118-d5f2-4547-84b6-2aaf229316f0-kube-api-access-q4579" (OuterVolumeSpecName: "kube-api-access-q4579") pod "90331118-d5f2-4547-84b6-2aaf229316f0" (UID: "90331118-d5f2-4547-84b6-2aaf229316f0"). InnerVolumeSpecName "kube-api-access-q4579". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.601727 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90331118-d5f2-4547-84b6-2aaf229316f0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "90331118-d5f2-4547-84b6-2aaf229316f0" (UID: "90331118-d5f2-4547-84b6-2aaf229316f0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.679765 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d947653-6517-44e3-8673-5c0bcc7aebae" path="/var/lib/kubelet/pods/0d947653-6517-44e3-8673-5c0bcc7aebae/volumes" Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.689072 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/90331118-d5f2-4547-84b6-2aaf229316f0-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.689095 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/90331118-d5f2-4547-84b6-2aaf229316f0-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.689116 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4579\" (UniqueName: \"kubernetes.io/projected/90331118-d5f2-4547-84b6-2aaf229316f0-kube-api-access-q4579\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.980620 4970 generic.go:334] "Generic (PLEG): container finished" podID="e9332522-1cef-496d-a3f4-eaa0514c5b81" containerID="b88c24326c0d9a22363720c8b38f186c5823bf2246ea91728123f08cf9e5d749" exitCode=0 Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.980681 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwvxr" event={"ID":"e9332522-1cef-496d-a3f4-eaa0514c5b81","Type":"ContainerDied","Data":"b88c24326c0d9a22363720c8b38f186c5823bf2246ea91728123f08cf9e5d749"} Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.982165 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rdx4x" event={"ID":"90331118-d5f2-4547-84b6-2aaf229316f0","Type":"ContainerDied","Data":"b70bfe8a3da17f3546a023922df2b69af1dc827af58d032912e349cbe1139171"} Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.982202 4970 scope.go:117] "RemoveContainer" containerID="deb24a48c84c6a6b5715e0640736ebf2009635efb69acd7bae791eaee9c6565a" Sep 30 09:50:13 crc kubenswrapper[4970]: I0930 09:50:13.982295 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rdx4x" Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.001293 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdx4x"] Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.004200 4970 scope.go:117] "RemoveContainer" containerID="65f1ce25e67189ca516c424bf2d97da69e4a255a71690b35dfdc6f338267cfe2" Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.004558 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rdx4x"] Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.022424 4970 scope.go:117] "RemoveContainer" containerID="136bd80c7700147705c9cbb3b6c15a94c81cec134b7fcca1d010de3f986ab9ff" Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.303755 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pwvxr" Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.402018 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9332522-1cef-496d-a3f4-eaa0514c5b81-utilities\") pod \"e9332522-1cef-496d-a3f4-eaa0514c5b81\" (UID: \"e9332522-1cef-496d-a3f4-eaa0514c5b81\") " Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.402082 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9332522-1cef-496d-a3f4-eaa0514c5b81-catalog-content\") pod \"e9332522-1cef-496d-a3f4-eaa0514c5b81\" (UID: \"e9332522-1cef-496d-a3f4-eaa0514c5b81\") " Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.402183 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6ggg\" (UniqueName: \"kubernetes.io/projected/e9332522-1cef-496d-a3f4-eaa0514c5b81-kube-api-access-w6ggg\") pod \"e9332522-1cef-496d-a3f4-eaa0514c5b81\" (UID: \"e9332522-1cef-496d-a3f4-eaa0514c5b81\") " Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.404045 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9332522-1cef-496d-a3f4-eaa0514c5b81-utilities" (OuterVolumeSpecName: "utilities") pod "e9332522-1cef-496d-a3f4-eaa0514c5b81" (UID: "e9332522-1cef-496d-a3f4-eaa0514c5b81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.408960 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9332522-1cef-496d-a3f4-eaa0514c5b81-kube-api-access-w6ggg" (OuterVolumeSpecName: "kube-api-access-w6ggg") pod "e9332522-1cef-496d-a3f4-eaa0514c5b81" (UID: "e9332522-1cef-496d-a3f4-eaa0514c5b81"). InnerVolumeSpecName "kube-api-access-w6ggg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.499955 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9332522-1cef-496d-a3f4-eaa0514c5b81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9332522-1cef-496d-a3f4-eaa0514c5b81" (UID: "e9332522-1cef-496d-a3f4-eaa0514c5b81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.504374 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6ggg\" (UniqueName: \"kubernetes.io/projected/e9332522-1cef-496d-a3f4-eaa0514c5b81-kube-api-access-w6ggg\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.504430 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9332522-1cef-496d-a3f4-eaa0514c5b81-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.504445 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9332522-1cef-496d-a3f4-eaa0514c5b81-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.993370 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pwvxr" event={"ID":"e9332522-1cef-496d-a3f4-eaa0514c5b81","Type":"ContainerDied","Data":"fac8cb3cec98057c739170324fb21d79719599758a703e6f7b646421d45db9bf"} Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.994947 4970 scope.go:117] "RemoveContainer" containerID="b88c24326c0d9a22363720c8b38f186c5823bf2246ea91728123f08cf9e5d749" Sep 30 09:50:14 crc kubenswrapper[4970]: I0930 09:50:14.993444 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pwvxr" Sep 30 09:50:15 crc kubenswrapper[4970]: I0930 09:50:15.013442 4970 scope.go:117] "RemoveContainer" containerID="83875cd2698c78595fa1006ee8a47e5b4bbd63a08f19f5137e53b68dd1dbaf6a" Sep 30 09:50:15 crc kubenswrapper[4970]: I0930 09:50:15.025837 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pwvxr"] Sep 30 09:50:15 crc kubenswrapper[4970]: I0930 09:50:15.031579 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pwvxr"] Sep 30 09:50:15 crc kubenswrapper[4970]: I0930 09:50:15.043447 4970 scope.go:117] "RemoveContainer" containerID="0ea2254f547f8e4eda0f27b47dbd42ebeb467c6762330c7ce9169ccda904d5ff" Sep 30 09:50:15 crc kubenswrapper[4970]: I0930 09:50:15.675330 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90331118-d5f2-4547-84b6-2aaf229316f0" path="/var/lib/kubelet/pods/90331118-d5f2-4547-84b6-2aaf229316f0/volumes" Sep 30 09:50:15 crc kubenswrapper[4970]: I0930 09:50:15.676223 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9332522-1cef-496d-a3f4-eaa0514c5b81" path="/var/lib/kubelet/pods/e9332522-1cef-496d-a3f4-eaa0514c5b81/volumes" Sep 30 09:50:19 crc kubenswrapper[4970]: I0930 09:50:19.201086 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gtdq5"] Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.226148 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" podUID="0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" containerName="oauth-openshift" containerID="cri-o://6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568" gracePeriod=15 Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.674103 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.715727 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56c7c74f4-6kmj7"] Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716033 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feeba15d-0ba3-40c7-9ddb-cbb90c803826" containerName="extract-utilities" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716047 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="feeba15d-0ba3-40c7-9ddb-cbb90c803826" containerName="extract-utilities" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716058 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feeba15d-0ba3-40c7-9ddb-cbb90c803826" containerName="registry-server" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716064 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="feeba15d-0ba3-40c7-9ddb-cbb90c803826" containerName="registry-server" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716073 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7535af89-756e-4e84-b9f3-246296ca252e" containerName="collect-profiles" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716080 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7535af89-756e-4e84-b9f3-246296ca252e" containerName="collect-profiles" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716089 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9332522-1cef-496d-a3f4-eaa0514c5b81" containerName="extract-content" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716094 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9332522-1cef-496d-a3f4-eaa0514c5b81" containerName="extract-content" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716105 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90331118-d5f2-4547-84b6-2aaf229316f0" containerName="registry-server" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716113 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="90331118-d5f2-4547-84b6-2aaf229316f0" containerName="registry-server" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716120 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d947653-6517-44e3-8673-5c0bcc7aebae" containerName="extract-utilities" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716126 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d947653-6517-44e3-8673-5c0bcc7aebae" containerName="extract-utilities" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716134 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="753fd6be-c8e3-42ad-bb18-29083f07a9ac" containerName="pruner" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716140 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="753fd6be-c8e3-42ad-bb18-29083f07a9ac" containerName="pruner" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716152 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90331118-d5f2-4547-84b6-2aaf229316f0" containerName="extract-content" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716158 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="90331118-d5f2-4547-84b6-2aaf229316f0" containerName="extract-content" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716167 4970 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0d947653-6517-44e3-8673-5c0bcc7aebae" containerName="registry-server" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716173 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d947653-6517-44e3-8673-5c0bcc7aebae" containerName="registry-server" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716182 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afde4377-4f82-4586-913d-5238cb5f9c05" containerName="pruner" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716188 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="afde4377-4f82-4586-913d-5238cb5f9c05" containerName="pruner" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716195 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9332522-1cef-496d-a3f4-eaa0514c5b81" containerName="registry-server" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716200 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9332522-1cef-496d-a3f4-eaa0514c5b81" containerName="registry-server" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716211 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d947653-6517-44e3-8673-5c0bcc7aebae" containerName="extract-content" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716216 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d947653-6517-44e3-8673-5c0bcc7aebae" containerName="extract-content" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716225 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90331118-d5f2-4547-84b6-2aaf229316f0" containerName="extract-utilities" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716231 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="90331118-d5f2-4547-84b6-2aaf229316f0" containerName="extract-utilities" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716238 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9332522-1cef-496d-a3f4-eaa0514c5b81" containerName="extract-utilities" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716244 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9332522-1cef-496d-a3f4-eaa0514c5b81" containerName="extract-utilities" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716252 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feeba15d-0ba3-40c7-9ddb-cbb90c803826" containerName="extract-content" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716257 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="feeba15d-0ba3-40c7-9ddb-cbb90c803826" containerName="extract-content" Sep 30 09:50:44 crc kubenswrapper[4970]: E0930 09:50:44.716268 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" containerName="oauth-openshift" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716274 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" containerName="oauth-openshift" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716371 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9332522-1cef-496d-a3f4-eaa0514c5b81" containerName="registry-server" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716381 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7535af89-756e-4e84-b9f3-246296ca252e" containerName="collect-profiles" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716392 4970 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" containerName="oauth-openshift" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716402 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="753fd6be-c8e3-42ad-bb18-29083f07a9ac" containerName="pruner" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716410 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="afde4377-4f82-4586-913d-5238cb5f9c05" containerName="pruner" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716417 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d947653-6517-44e3-8673-5c0bcc7aebae" containerName="registry-server" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716422 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="90331118-d5f2-4547-84b6-2aaf229316f0" containerName="registry-server" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716427 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="feeba15d-0ba3-40c7-9ddb-cbb90c803826" containerName="registry-server" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.716857 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.741393 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c7c74f4-6kmj7"] Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.787622 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-login\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.787708 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78m26\" (UniqueName: \"kubernetes.io/projected/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-kube-api-access-78m26\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.787769 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-trusted-ca-bundle\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.787821 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-audit-dir\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.787889 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-idp-0-file-data\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.787941 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-cliconfig\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.788014 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-serving-cert\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.788005 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.788081 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-provider-selection\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.788126 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-error\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.788418 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-ocp-branding-template\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.788817 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.788865 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-service-ca\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.788925 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-session\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.788962 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-router-certs\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.789025 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-audit-policies\") pod \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\" (UID: \"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e\") " Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.789131 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.789536 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.789910 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.789932 4970 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-audit-dir\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.789948 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.789961 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.790196 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.795809 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.795957 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-kube-api-access-78m26" (OuterVolumeSpecName: "kube-api-access-78m26") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "kube-api-access-78m26". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.796187 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.796241 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.796865 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.797280 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.797473 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.797769 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.799263 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" (UID: "0cf6b829-19f3-40d1-8bbd-294cf38f6b1e"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.891645 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.891735 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.891788 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-audit-dir\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.891874 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.891922 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94xjv\" (UniqueName: \"kubernetes.io/projected/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-kube-api-access-94xjv\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.891953 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892009 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892030 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892049 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-audit-policies\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892067 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892173 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892217 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892261 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892317 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892361 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892421 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892434 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892444 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892454 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892464 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892496 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892508 4970 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-audit-policies\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892518 4970 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.892527 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78m26\" (UniqueName: \"kubernetes.io/projected/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e-kube-api-access-78m26\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.993746 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-audit-dir\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.993806 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.993835 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94xjv\" (UniqueName: 
\"kubernetes.io/projected/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-kube-api-access-94xjv\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.993855 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.993887 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.993909 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.993901 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-audit-dir\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.993926 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-audit-policies\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.994195 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.994303 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.994360 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.994429 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.994475 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.994574 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.994627 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.995162 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.995551 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-audit-policies\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.995663 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-service-ca\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.995915 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.999697 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-session\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:44 crc kubenswrapper[4970]: I0930 09:50:44.999751 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-user-template-error\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.000316 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-router-certs\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.000572 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.000704 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.000809 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.001267 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.002606 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-v4-0-config-user-template-login\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.023537 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94xjv\" (UniqueName: \"kubernetes.io/projected/8b2851c0-fd5a-424e-bae0-c32ee9cb240e-kube-api-access-94xjv\") pod \"oauth-openshift-56c7c74f4-6kmj7\" (UID: \"8b2851c0-fd5a-424e-bae0-c32ee9cb240e\") " pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.051745 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.185267 4970 generic.go:334] "Generic (PLEG): container finished" podID="0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" containerID="6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568" exitCode=0 Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.185821 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" event={"ID":"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e","Type":"ContainerDied","Data":"6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568"} Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.185877 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" event={"ID":"0cf6b829-19f3-40d1-8bbd-294cf38f6b1e","Type":"ContainerDied","Data":"9634f018872bbde8500db6d38082010fd73b161c1b239ef85f130a9298facb56"} Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.185915 4970 scope.go:117] "RemoveContainer" containerID="6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.186265 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-gtdq5" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.236490 4970 scope.go:117] "RemoveContainer" containerID="6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568" Sep 30 09:50:45 crc kubenswrapper[4970]: E0930 09:50:45.239093 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568\": container with ID starting with 6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568 not found: ID does not exist" containerID="6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.239161 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568"} err="failed to get container status \"6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568\": rpc error: code = NotFound desc = could not find container \"6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568\": container with ID starting with 6ba3f9fed0ff246c19192097e86d0e4fcb29a8a4d037e980870162de0406c568 not found: ID does not exist" Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.239214 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gtdq5"] Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.245146 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-gtdq5"] Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.365753 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56c7c74f4-6kmj7"] Sep 30 09:50:45 crc kubenswrapper[4970]: I0930 09:50:45.681209 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cf6b829-19f3-40d1-8bbd-294cf38f6b1e" path="/var/lib/kubelet/pods/0cf6b829-19f3-40d1-8bbd-294cf38f6b1e/volumes" Sep 30 09:50:46 crc kubenswrapper[4970]: I0930 09:50:46.195283 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" event={"ID":"8b2851c0-fd5a-424e-bae0-c32ee9cb240e","Type":"ContainerStarted","Data":"f5af1f11779a45393d1ad3ccdb1b279484cf87dd3308af4875451d3fb3f2e8fa"} Sep 30 09:50:46 crc kubenswrapper[4970]: I0930 09:50:46.195335 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" event={"ID":"8b2851c0-fd5a-424e-bae0-c32ee9cb240e","Type":"ContainerStarted","Data":"ec01de7a27317cf0e4f536608d2ecfd257c35203b6c38d3485ec4414616f4a06"} Sep 30 09:50:46 crc kubenswrapper[4970]: I0930 09:50:46.195616 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:46 crc kubenswrapper[4970]: I0930 09:50:46.207188 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" Sep 30 09:50:46 crc kubenswrapper[4970]: I0930 09:50:46.238505 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56c7c74f4-6kmj7" podStartSLOduration=27.238465176 podStartE2EDuration="27.238465176s" podCreationTimestamp="2025-09-30 09:50:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:50:46.222387984 +0000 UTC m=+259.294238928" watchObservedRunningTime="2025-09-30 09:50:46.238465176 +0000 UTC m=+259.310316130" Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.811028 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrdxh"] Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.812846 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rrdxh" podUID="aa02b988-0645-4466-a5bc-9c99033fdcdc" containerName="registry-server" containerID="cri-o://d65937ced1ac37265f872dc261dbdce7b4b5c270e7bf92cecdc35fb469184de2" gracePeriod=30 Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.820347 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjvrm"] Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.820679 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gjvrm" podUID="a516da21-a8ba-423e-85ae-49cb3383e933" containerName="registry-server" containerID="cri-o://ec0560f41c6d9e2abb4ad43e26d0418679550f95421ccf2257706d684f6d96bf" gracePeriod=30 Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.840918 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-27726"] Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.841368 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-27726" podUID="14645e18-5ae5-40f7-b52f-591a49032bc0" containerName="marketplace-operator" containerID="cri-o://c7b74aa6640f1403c6775a94d4a7e9f12ba07694385c2573c19f6e9c1ececd91" gracePeriod=30 Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.861773 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rg42w"] Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.862116 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rg42w" podUID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f" containerName="registry-server" containerID="cri-o://8ea7410cbe711382738e9417de1c75bae7c86413c1b8980b4384062de52f2dba" gracePeriod=30 Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.867136 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hlzvg"] Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.867565 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hlzvg" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerName="registry-server" containerID="cri-o://104fce426f0a448287432d5b80da2c139f0be3c1b147a9c4e6f2c1b0dc08a0dc" gracePeriod=30 Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.869326 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mc94p"] Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.870093 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.876345 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mc94p"] Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.996680 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-hlzvg" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerName="registry-server" probeResult="failure" output=< Sep 30 09:50:58 crc kubenswrapper[4970]: cancellation received Sep 30 09:50:58 crc kubenswrapper[4970]: error: failed to connect service at ":50051": context canceled Sep 30 09:50:58 crc kubenswrapper[4970]: > Sep 30 09:50:58 crc kubenswrapper[4970]: I0930 09:50:58.997893 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-hlzvg" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerName="registry-server" probeResult="failure" output=< Sep 30 09:50:58 crc kubenswrapper[4970]: cancellation received Sep 30 09:50:58 crc kubenswrapper[4970]: > Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.011419 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d71db2c5-c1c2-42f9-a89e-086c606b9e5f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mc94p\" (UID: \"d71db2c5-c1c2-42f9-a89e-086c606b9e5f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.011485 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj4bh\" (UniqueName: \"kubernetes.io/projected/d71db2c5-c1c2-42f9-a89e-086c606b9e5f-kube-api-access-fj4bh\") pod \"marketplace-operator-79b997595-mc94p\" (UID: \"d71db2c5-c1c2-42f9-a89e-086c606b9e5f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.011520 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d71db2c5-c1c2-42f9-a89e-086c606b9e5f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mc94p\" (UID: \"d71db2c5-c1c2-42f9-a89e-086c606b9e5f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.113659 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d71db2c5-c1c2-42f9-a89e-086c606b9e5f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mc94p\" (UID: \"d71db2c5-c1c2-42f9-a89e-086c606b9e5f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.114210 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj4bh\" (UniqueName: \"kubernetes.io/projected/d71db2c5-c1c2-42f9-a89e-086c606b9e5f-kube-api-access-fj4bh\") pod \"marketplace-operator-79b997595-mc94p\" (UID: \"d71db2c5-c1c2-42f9-a89e-086c606b9e5f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.114236 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d71db2c5-c1c2-42f9-a89e-086c606b9e5f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mc94p\" (UID: \"d71db2c5-c1c2-42f9-a89e-086c606b9e5f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.124464 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d71db2c5-c1c2-42f9-a89e-086c606b9e5f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mc94p\" (UID: \"d71db2c5-c1c2-42f9-a89e-086c606b9e5f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.129442 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d71db2c5-c1c2-42f9-a89e-086c606b9e5f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mc94p\" (UID: \"d71db2c5-c1c2-42f9-a89e-086c606b9e5f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.143355 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj4bh\" (UniqueName: \"kubernetes.io/projected/d71db2c5-c1c2-42f9-a89e-086c606b9e5f-kube-api-access-fj4bh\") pod \"marketplace-operator-79b997595-mc94p\" (UID: \"d71db2c5-c1c2-42f9-a89e-086c606b9e5f\") " pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.201745 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.286405 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.292449 4970 generic.go:334] "Generic (PLEG): container finished" podID="aa02b988-0645-4466-a5bc-9c99033fdcdc" containerID="d65937ced1ac37265f872dc261dbdce7b4b5c270e7bf92cecdc35fb469184de2" exitCode=0 Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.292571 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrdxh" event={"ID":"aa02b988-0645-4466-a5bc-9c99033fdcdc","Type":"ContainerDied","Data":"d65937ced1ac37265f872dc261dbdce7b4b5c270e7bf92cecdc35fb469184de2"} Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.293876 4970 generic.go:334] "Generic (PLEG): container finished" podID="14645e18-5ae5-40f7-b52f-591a49032bc0" containerID="c7b74aa6640f1403c6775a94d4a7e9f12ba07694385c2573c19f6e9c1ececd91" exitCode=0 Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.293936 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-27726" event={"ID":"14645e18-5ae5-40f7-b52f-591a49032bc0","Type":"ContainerDied","Data":"c7b74aa6640f1403c6775a94d4a7e9f12ba07694385c2573c19f6e9c1ececd91"} Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.295511 4970 generic.go:334] "Generic (PLEG): container finished" podID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f" containerID="8ea7410cbe711382738e9417de1c75bae7c86413c1b8980b4384062de52f2dba" exitCode=0 Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.295550 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rg42w" event={"ID":"8140e36b-113c-4dbb-982d-4f94ec7c0a5f","Type":"ContainerDied","Data":"8ea7410cbe711382738e9417de1c75bae7c86413c1b8980b4384062de52f2dba"} Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.298526 4970 generic.go:334] "Generic (PLEG): container finished" podID="a516da21-a8ba-423e-85ae-49cb3383e933" containerID="ec0560f41c6d9e2abb4ad43e26d0418679550f95421ccf2257706d684f6d96bf" exitCode=0 Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.298571 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjvrm" event={"ID":"a516da21-a8ba-423e-85ae-49cb3383e933","Type":"ContainerDied","Data":"ec0560f41c6d9e2abb4ad43e26d0418679550f95421ccf2257706d684f6d96bf"} Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.298589 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gjvrm" event={"ID":"a516da21-a8ba-423e-85ae-49cb3383e933","Type":"ContainerDied","Data":"adac14ad5537208dc57ffc8b51e5566e3c384d9f1731e20f297765aa584afd81"} Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.298607 4970 scope.go:117] "RemoveContainer" containerID="ec0560f41c6d9e2abb4ad43e26d0418679550f95421ccf2257706d684f6d96bf" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.298745 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gjvrm" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.305044 4970 generic.go:334] "Generic (PLEG): container finished" podID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerID="104fce426f0a448287432d5b80da2c139f0be3c1b147a9c4e6f2c1b0dc08a0dc" exitCode=0 Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.305076 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlzvg" event={"ID":"c3b26d6c-14bb-4572-87c8-6da4476f88a4","Type":"ContainerDied","Data":"104fce426f0a448287432d5b80da2c139f0be3c1b147a9c4e6f2c1b0dc08a0dc"} Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.331425 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rg42w" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.340677 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.342577 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-27726" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.360419 4970 scope.go:117] "RemoveContainer" containerID="15cf63865d78488fe440d8b764f33b5af90d9836fb67a1e48f9ce3c5e217c83b" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.411713 4970 scope.go:117] "RemoveContainer" containerID="12f8c4c14fabeaa2f58680b0e96368465278254ac39fac47d2c7bdb1b87017a8" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.442619 4970 scope.go:117] "RemoveContainer" containerID="ec0560f41c6d9e2abb4ad43e26d0418679550f95421ccf2257706d684f6d96bf" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.443279 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrfh5\" (UniqueName: \"kubernetes.io/projected/14645e18-5ae5-40f7-b52f-591a49032bc0-kube-api-access-jrfh5\") pod \"14645e18-5ae5-40f7-b52f-591a49032bc0\" (UID: \"14645e18-5ae5-40f7-b52f-591a49032bc0\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.443353 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/14645e18-5ae5-40f7-b52f-591a49032bc0-marketplace-operator-metrics\") pod \"14645e18-5ae5-40f7-b52f-591a49032bc0\" (UID: \"14645e18-5ae5-40f7-b52f-591a49032bc0\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.443383 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa02b988-0645-4466-a5bc-9c99033fdcdc-catalog-content\") pod \"aa02b988-0645-4466-a5bc-9c99033fdcdc\" (UID: \"aa02b988-0645-4466-a5bc-9c99033fdcdc\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.443522 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa02b988-0645-4466-a5bc-9c99033fdcdc-utilities\") pod \"aa02b988-0645-4466-a5bc-9c99033fdcdc\" (UID: \"aa02b988-0645-4466-a5bc-9c99033fdcdc\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.443627 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-utilities\") pod \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\" (UID: 
\"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.443667 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14645e18-5ae5-40f7-b52f-591a49032bc0-marketplace-trusted-ca\") pod \"14645e18-5ae5-40f7-b52f-591a49032bc0\" (UID: \"14645e18-5ae5-40f7-b52f-591a49032bc0\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.443701 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-catalog-content\") pod \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\" (UID: \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.443725 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rtcw\" (UniqueName: \"kubernetes.io/projected/aa02b988-0645-4466-a5bc-9c99033fdcdc-kube-api-access-5rtcw\") pod \"aa02b988-0645-4466-a5bc-9c99033fdcdc\" (UID: \"aa02b988-0645-4466-a5bc-9c99033fdcdc\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.443782 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n67j7\" (UniqueName: \"kubernetes.io/projected/a516da21-a8ba-423e-85ae-49cb3383e933-kube-api-access-n67j7\") pod \"a516da21-a8ba-423e-85ae-49cb3383e933\" (UID: \"a516da21-a8ba-423e-85ae-49cb3383e933\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.443802 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb8r7\" (UniqueName: \"kubernetes.io/projected/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-kube-api-access-cb8r7\") pod \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\" (UID: \"8140e36b-113c-4dbb-982d-4f94ec7c0a5f\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.443846 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a516da21-a8ba-423e-85ae-49cb3383e933-catalog-content\") pod \"a516da21-a8ba-423e-85ae-49cb3383e933\" (UID: \"a516da21-a8ba-423e-85ae-49cb3383e933\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.443875 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a516da21-a8ba-423e-85ae-49cb3383e933-utilities\") pod \"a516da21-a8ba-423e-85ae-49cb3383e933\" (UID: \"a516da21-a8ba-423e-85ae-49cb3383e933\") " Sep 30 09:50:59 crc kubenswrapper[4970]: E0930 09:50:59.444494 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec0560f41c6d9e2abb4ad43e26d0418679550f95421ccf2257706d684f6d96bf\": container with ID starting with ec0560f41c6d9e2abb4ad43e26d0418679550f95421ccf2257706d684f6d96bf not found: ID does not exist" containerID="ec0560f41c6d9e2abb4ad43e26d0418679550f95421ccf2257706d684f6d96bf" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.444583 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec0560f41c6d9e2abb4ad43e26d0418679550f95421ccf2257706d684f6d96bf"} err="failed to get container status \"ec0560f41c6d9e2abb4ad43e26d0418679550f95421ccf2257706d684f6d96bf\": rpc error: code = NotFound desc = could not find container \"ec0560f41c6d9e2abb4ad43e26d0418679550f95421ccf2257706d684f6d96bf\": container with ID starting with 
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.444716 4970 scope.go:117] "RemoveContainer" containerID="15cf63865d78488fe440d8b764f33b5af90d9836fb67a1e48f9ce3c5e217c83b"
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.445261 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14645e18-5ae5-40f7-b52f-591a49032bc0-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "14645e18-5ae5-40f7-b52f-591a49032bc0" (UID: "14645e18-5ae5-40f7-b52f-591a49032bc0"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 09:50:59 crc kubenswrapper[4970]: E0930 09:50:59.446429 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15cf63865d78488fe440d8b764f33b5af90d9836fb67a1e48f9ce3c5e217c83b\": container with ID starting with 15cf63865d78488fe440d8b764f33b5af90d9836fb67a1e48f9ce3c5e217c83b not found: ID does not exist" containerID="15cf63865d78488fe440d8b764f33b5af90d9836fb67a1e48f9ce3c5e217c83b"
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.446472 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"15cf63865d78488fe440d8b764f33b5af90d9836fb67a1e48f9ce3c5e217c83b"} err="failed to get container status \"15cf63865d78488fe440d8b764f33b5af90d9836fb67a1e48f9ce3c5e217c83b\": rpc error: code = NotFound desc = could not find container \"15cf63865d78488fe440d8b764f33b5af90d9836fb67a1e48f9ce3c5e217c83b\": container with ID starting with 15cf63865d78488fe440d8b764f33b5af90d9836fb67a1e48f9ce3c5e217c83b not found: ID does not exist"
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.446465 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa02b988-0645-4466-a5bc-9c99033fdcdc-utilities" (OuterVolumeSpecName: "utilities") pod "aa02b988-0645-4466-a5bc-9c99033fdcdc" (UID: "aa02b988-0645-4466-a5bc-9c99033fdcdc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.446505 4970 scope.go:117] "RemoveContainer" containerID="12f8c4c14fabeaa2f58680b0e96368465278254ac39fac47d2c7bdb1b87017a8"
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.446607 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a516da21-a8ba-423e-85ae-49cb3383e933-utilities" (OuterVolumeSpecName: "utilities") pod "a516da21-a8ba-423e-85ae-49cb3383e933" (UID: "a516da21-a8ba-423e-85ae-49cb3383e933"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.447049 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-utilities" (OuterVolumeSpecName: "utilities") pod "8140e36b-113c-4dbb-982d-4f94ec7c0a5f" (UID: "8140e36b-113c-4dbb-982d-4f94ec7c0a5f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.452935 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a516da21-a8ba-423e-85ae-49cb3383e933-kube-api-access-n67j7" (OuterVolumeSpecName: "kube-api-access-n67j7") pod "a516da21-a8ba-423e-85ae-49cb3383e933" (UID: "a516da21-a8ba-423e-85ae-49cb3383e933"). InnerVolumeSpecName "kube-api-access-n67j7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 09:50:59 crc kubenswrapper[4970]: E0930 09:50:59.453360 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12f8c4c14fabeaa2f58680b0e96368465278254ac39fac47d2c7bdb1b87017a8\": container with ID starting with 12f8c4c14fabeaa2f58680b0e96368465278254ac39fac47d2c7bdb1b87017a8 not found: ID does not exist" containerID="12f8c4c14fabeaa2f58680b0e96368465278254ac39fac47d2c7bdb1b87017a8"
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.453386 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12f8c4c14fabeaa2f58680b0e96368465278254ac39fac47d2c7bdb1b87017a8"} err="failed to get container status \"12f8c4c14fabeaa2f58680b0e96368465278254ac39fac47d2c7bdb1b87017a8\": rpc error: code = NotFound desc = could not find container \"12f8c4c14fabeaa2f58680b0e96368465278254ac39fac47d2c7bdb1b87017a8\": container with ID starting with 12f8c4c14fabeaa2f58680b0e96368465278254ac39fac47d2c7bdb1b87017a8 not found: ID does not exist"
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.453909 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14645e18-5ae5-40f7-b52f-591a49032bc0-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "14645e18-5ae5-40f7-b52f-591a49032bc0" (UID: "14645e18-5ae5-40f7-b52f-591a49032bc0"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.454835 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa02b988-0645-4466-a5bc-9c99033fdcdc-kube-api-access-5rtcw" (OuterVolumeSpecName: "kube-api-access-5rtcw") pod "aa02b988-0645-4466-a5bc-9c99033fdcdc" (UID: "aa02b988-0645-4466-a5bc-9c99033fdcdc"). InnerVolumeSpecName "kube-api-access-5rtcw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.455658 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14645e18-5ae5-40f7-b52f-591a49032bc0-kube-api-access-jrfh5" (OuterVolumeSpecName: "kube-api-access-jrfh5") pod "14645e18-5ae5-40f7-b52f-591a49032bc0" (UID: "14645e18-5ae5-40f7-b52f-591a49032bc0"). InnerVolumeSpecName "kube-api-access-jrfh5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.455843 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hlzvg"
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.457759 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-kube-api-access-cb8r7" (OuterVolumeSpecName: "kube-api-access-cb8r7") pod "8140e36b-113c-4dbb-982d-4f94ec7c0a5f" (UID: "8140e36b-113c-4dbb-982d-4f94ec7c0a5f"). InnerVolumeSpecName "kube-api-access-cb8r7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.492294 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8140e36b-113c-4dbb-982d-4f94ec7c0a5f" (UID: "8140e36b-113c-4dbb-982d-4f94ec7c0a5f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.519436 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mc94p"]
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.528685 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a516da21-a8ba-423e-85ae-49cb3383e933-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a516da21-a8ba-423e-85ae-49cb3383e933" (UID: "a516da21-a8ba-423e-85ae-49cb3383e933"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 09:50:59 crc kubenswrapper[4970]: W0930 09:50:59.534157 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd71db2c5_c1c2_42f9_a89e_086c606b9e5f.slice/crio-3869922c5bdf37a2c072a0fcef80ab8945af6bc9d4751e85efd040f93ec7668a WatchSource:0}: Error finding container 3869922c5bdf37a2c072a0fcef80ab8945af6bc9d4751e85efd040f93ec7668a: Status 404 returned error can't find the container with id 3869922c5bdf37a2c072a0fcef80ab8945af6bc9d4751e85efd040f93ec7668a
Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.539480 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa02b988-0645-4466-a5bc-9c99033fdcdc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa02b988-0645-4466-a5bc-9c99033fdcdc" (UID: "aa02b988-0645-4466-a5bc-9c99033fdcdc"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.544979 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tszq6\" (UniqueName: \"kubernetes.io/projected/c3b26d6c-14bb-4572-87c8-6da4476f88a4-kube-api-access-tszq6\") pod \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\" (UID: \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545176 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b26d6c-14bb-4572-87c8-6da4476f88a4-catalog-content\") pod \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\" (UID: \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545238 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b26d6c-14bb-4572-87c8-6da4476f88a4-utilities\") pod \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\" (UID: \"c3b26d6c-14bb-4572-87c8-6da4476f88a4\") " Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545515 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a516da21-a8ba-423e-85ae-49cb3383e933-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545536 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a516da21-a8ba-423e-85ae-49cb3383e933-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545546 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrfh5\" (UniqueName: \"kubernetes.io/projected/14645e18-5ae5-40f7-b52f-591a49032bc0-kube-api-access-jrfh5\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545561 4970 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/14645e18-5ae5-40f7-b52f-591a49032bc0-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545575 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa02b988-0645-4466-a5bc-9c99033fdcdc-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545584 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa02b988-0645-4466-a5bc-9c99033fdcdc-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545593 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545605 4970 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/14645e18-5ae5-40f7-b52f-591a49032bc0-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545612 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-catalog-content\") on node \"crc\" 
DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545621 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rtcw\" (UniqueName: \"kubernetes.io/projected/aa02b988-0645-4466-a5bc-9c99033fdcdc-kube-api-access-5rtcw\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545630 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb8r7\" (UniqueName: \"kubernetes.io/projected/8140e36b-113c-4dbb-982d-4f94ec7c0a5f-kube-api-access-cb8r7\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.545639 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n67j7\" (UniqueName: \"kubernetes.io/projected/a516da21-a8ba-423e-85ae-49cb3383e933-kube-api-access-n67j7\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.546136 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b26d6c-14bb-4572-87c8-6da4476f88a4-utilities" (OuterVolumeSpecName: "utilities") pod "c3b26d6c-14bb-4572-87c8-6da4476f88a4" (UID: "c3b26d6c-14bb-4572-87c8-6da4476f88a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.555616 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3b26d6c-14bb-4572-87c8-6da4476f88a4-kube-api-access-tszq6" (OuterVolumeSpecName: "kube-api-access-tszq6") pod "c3b26d6c-14bb-4572-87c8-6da4476f88a4" (UID: "c3b26d6c-14bb-4572-87c8-6da4476f88a4"). InnerVolumeSpecName "kube-api-access-tszq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.636665 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gjvrm"] Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.643106 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gjvrm"] Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.647416 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tszq6\" (UniqueName: \"kubernetes.io/projected/c3b26d6c-14bb-4572-87c8-6da4476f88a4-kube-api-access-tszq6\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.647458 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3b26d6c-14bb-4572-87c8-6da4476f88a4-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.660301 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3b26d6c-14bb-4572-87c8-6da4476f88a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3b26d6c-14bb-4572-87c8-6da4476f88a4" (UID: "c3b26d6c-14bb-4572-87c8-6da4476f88a4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.679035 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a516da21-a8ba-423e-85ae-49cb3383e933" path="/var/lib/kubelet/pods/a516da21-a8ba-423e-85ae-49cb3383e933/volumes" Sep 30 09:50:59 crc kubenswrapper[4970]: I0930 09:50:59.748550 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3b26d6c-14bb-4572-87c8-6da4476f88a4-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.313231 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hlzvg" event={"ID":"c3b26d6c-14bb-4572-87c8-6da4476f88a4","Type":"ContainerDied","Data":"a04c8ae3a4a6d408b89172cbce2c3ff13898e3d06e1468741aff0fa3fec61443"} Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.313291 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hlzvg" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.313669 4970 scope.go:117] "RemoveContainer" containerID="104fce426f0a448287432d5b80da2c139f0be3c1b147a9c4e6f2c1b0dc08a0dc" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.315961 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" event={"ID":"d71db2c5-c1c2-42f9-a89e-086c606b9e5f","Type":"ContainerStarted","Data":"c24ea2c326de9a99f59320ed512f702827c49fe5dd67e30dd376728dfe594f5f"} Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.316025 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" event={"ID":"d71db2c5-c1c2-42f9-a89e-086c606b9e5f","Type":"ContainerStarted","Data":"3869922c5bdf37a2c072a0fcef80ab8945af6bc9d4751e85efd040f93ec7668a"} Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.316233 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.320813 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.327297 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rrdxh" event={"ID":"aa02b988-0645-4466-a5bc-9c99033fdcdc","Type":"ContainerDied","Data":"4c9bb8a4a72058caa8e61fc50ee7d66c2e1783e13e2043fd08b9d09034a9dbe1"} Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.327330 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rrdxh" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.330828 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-27726" event={"ID":"14645e18-5ae5-40f7-b52f-591a49032bc0","Type":"ContainerDied","Data":"85c49d47f6955adb3c5e35c0011336abe00ba0867f280e16b254c573b3cead70"} Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.330952 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-27726" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.334072 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rg42w" event={"ID":"8140e36b-113c-4dbb-982d-4f94ec7c0a5f","Type":"ContainerDied","Data":"66c64563b323311592c2ae1ae9c677c979692a48fcb58022be65b5a9b7794fd7"} Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.334359 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rg42w" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.346150 4970 scope.go:117] "RemoveContainer" containerID="d1c854b3050c7876390ca8b777641121a83a0d05bc27bee2abad7ab7706d2177" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.347594 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mc94p" podStartSLOduration=2.347561911 podStartE2EDuration="2.347561911s" podCreationTimestamp="2025-09-30 09:50:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:51:00.339976001 +0000 UTC m=+273.411826935" watchObservedRunningTime="2025-09-30 09:51:00.347561911 +0000 UTC m=+273.419412855" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.359712 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hlzvg"] Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.364249 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hlzvg"] Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.370843 4970 scope.go:117] "RemoveContainer" containerID="73043786e2365e4eee1df9a567002faddb9e0514af6a71010734961ca5e65720" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.394748 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-27726"] Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.421359 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-27726"] Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.425706 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rrdxh"] Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.431859 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rrdxh"] Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.446722 4970 scope.go:117] "RemoveContainer" containerID="d65937ced1ac37265f872dc261dbdce7b4b5c270e7bf92cecdc35fb469184de2" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.447246 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rg42w"] Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.454233 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rg42w"] Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.466103 4970 scope.go:117] "RemoveContainer" containerID="76dd372d46773f0c7132b9f8d8a900dedcbc2ad6c8ca7faacc3fc44691ba8895" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.504034 4970 scope.go:117] "RemoveContainer" containerID="f63dc99b44c9cfd8d631c9844dd7061c190b1d5feee606a8440cac1e81285951" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.531203 
4970 scope.go:117] "RemoveContainer" containerID="c7b74aa6640f1403c6775a94d4a7e9f12ba07694385c2573c19f6e9c1ececd91" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.551243 4970 scope.go:117] "RemoveContainer" containerID="8ea7410cbe711382738e9417de1c75bae7c86413c1b8980b4384062de52f2dba" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.573909 4970 scope.go:117] "RemoveContainer" containerID="e1b46706c4b0bc69c713985077f7ff82f23d7ec17e4e428d3cc4225bb5465dda" Sep 30 09:51:00 crc kubenswrapper[4970]: I0930 09:51:00.588107 4970 scope.go:117] "RemoveContainer" containerID="7663d250ec3a52f1c802941aa6982362e0ce1df763bda7a70187fc136f766e09" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.029593 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fflcj"] Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.030431 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14645e18-5ae5-40f7-b52f-591a49032bc0" containerName="marketplace-operator" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.030449 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="14645e18-5ae5-40f7-b52f-591a49032bc0" containerName="marketplace-operator" Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.030463 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a516da21-a8ba-423e-85ae-49cb3383e933" containerName="extract-content" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.030471 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a516da21-a8ba-423e-85ae-49cb3383e933" containerName="extract-content" Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.030483 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a516da21-a8ba-423e-85ae-49cb3383e933" containerName="extract-utilities" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.030490 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a516da21-a8ba-423e-85ae-49cb3383e933" containerName="extract-utilities" Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.030501 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerName="extract-content" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.030509 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerName="extract-content" Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.030520 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f" containerName="registry-server" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.030569 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f" containerName="registry-server" Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.030692 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa02b988-0645-4466-a5bc-9c99033fdcdc" containerName="extract-utilities" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.030701 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa02b988-0645-4466-a5bc-9c99033fdcdc" containerName="extract-utilities" Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.030746 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a516da21-a8ba-423e-85ae-49cb3383e933" containerName="registry-server" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.030755 4970 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a516da21-a8ba-423e-85ae-49cb3383e933" containerName="registry-server" Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.030765 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f" containerName="extract-content" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.030772 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f" containerName="extract-content" Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.032136 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerName="extract-utilities" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.032201 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerName="extract-utilities" Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.032242 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa02b988-0645-4466-a5bc-9c99033fdcdc" containerName="registry-server" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.032253 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa02b988-0645-4466-a5bc-9c99033fdcdc" containerName="registry-server" Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.032267 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f" containerName="extract-utilities" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.032276 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f" containerName="extract-utilities" Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.032298 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerName="registry-server" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.032307 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerName="registry-server" Sep 30 09:51:01 crc kubenswrapper[4970]: E0930 09:51:01.032325 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa02b988-0645-4466-a5bc-9c99033fdcdc" containerName="extract-content" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.032334 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa02b988-0645-4466-a5bc-9c99033fdcdc" containerName="extract-content" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.032633 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa02b988-0645-4466-a5bc-9c99033fdcdc" containerName="registry-server" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.032656 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a516da21-a8ba-423e-85ae-49cb3383e933" containerName="registry-server" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.032678 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f" containerName="registry-server" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.032692 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" containerName="registry-server" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.032701 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="14645e18-5ae5-40f7-b52f-591a49032bc0" containerName="marketplace-operator" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.033823 
4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.040031 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.047859 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fflcj"] Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.186735 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab7817b-f28e-447d-98f5-8fb66262d7ec-catalog-content\") pod \"redhat-marketplace-fflcj\" (UID: \"bab7817b-f28e-447d-98f5-8fb66262d7ec\") " pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.187178 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab7817b-f28e-447d-98f5-8fb66262d7ec-utilities\") pod \"redhat-marketplace-fflcj\" (UID: \"bab7817b-f28e-447d-98f5-8fb66262d7ec\") " pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.187313 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8hgw\" (UniqueName: \"kubernetes.io/projected/bab7817b-f28e-447d-98f5-8fb66262d7ec-kube-api-access-l8hgw\") pod \"redhat-marketplace-fflcj\" (UID: \"bab7817b-f28e-447d-98f5-8fb66262d7ec\") " pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.229389 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pk6ch"] Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.231285 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.233754 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.242933 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pk6ch"] Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.288559 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab7817b-f28e-447d-98f5-8fb66262d7ec-catalog-content\") pod \"redhat-marketplace-fflcj\" (UID: \"bab7817b-f28e-447d-98f5-8fb66262d7ec\") " pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.288615 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab7817b-f28e-447d-98f5-8fb66262d7ec-utilities\") pod \"redhat-marketplace-fflcj\" (UID: \"bab7817b-f28e-447d-98f5-8fb66262d7ec\") " pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.288668 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8hgw\" (UniqueName: \"kubernetes.io/projected/bab7817b-f28e-447d-98f5-8fb66262d7ec-kube-api-access-l8hgw\") pod \"redhat-marketplace-fflcj\" (UID: \"bab7817b-f28e-447d-98f5-8fb66262d7ec\") " pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.289153 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bab7817b-f28e-447d-98f5-8fb66262d7ec-catalog-content\") pod \"redhat-marketplace-fflcj\" (UID: \"bab7817b-f28e-447d-98f5-8fb66262d7ec\") " pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.289339 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bab7817b-f28e-447d-98f5-8fb66262d7ec-utilities\") pod \"redhat-marketplace-fflcj\" (UID: \"bab7817b-f28e-447d-98f5-8fb66262d7ec\") " pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.320928 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8hgw\" (UniqueName: \"kubernetes.io/projected/bab7817b-f28e-447d-98f5-8fb66262d7ec-kube-api-access-l8hgw\") pod \"redhat-marketplace-fflcj\" (UID: \"bab7817b-f28e-447d-98f5-8fb66262d7ec\") " pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.359649 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.392735 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpgrw\" (UniqueName: \"kubernetes.io/projected/95230c16-a1df-4406-8b31-e350c1981055-kube-api-access-kpgrw\") pod \"redhat-operators-pk6ch\" (UID: \"95230c16-a1df-4406-8b31-e350c1981055\") " pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.392801 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95230c16-a1df-4406-8b31-e350c1981055-catalog-content\") pod \"redhat-operators-pk6ch\" (UID: \"95230c16-a1df-4406-8b31-e350c1981055\") " pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.392852 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95230c16-a1df-4406-8b31-e350c1981055-utilities\") pod \"redhat-operators-pk6ch\" (UID: \"95230c16-a1df-4406-8b31-e350c1981055\") " pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.495788 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95230c16-a1df-4406-8b31-e350c1981055-utilities\") pod \"redhat-operators-pk6ch\" (UID: \"95230c16-a1df-4406-8b31-e350c1981055\") " pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.496500 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95230c16-a1df-4406-8b31-e350c1981055-utilities\") pod \"redhat-operators-pk6ch\" (UID: \"95230c16-a1df-4406-8b31-e350c1981055\") " pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.496593 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpgrw\" (UniqueName: \"kubernetes.io/projected/95230c16-a1df-4406-8b31-e350c1981055-kube-api-access-kpgrw\") pod \"redhat-operators-pk6ch\" (UID: \"95230c16-a1df-4406-8b31-e350c1981055\") " pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.496640 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95230c16-a1df-4406-8b31-e350c1981055-catalog-content\") pod \"redhat-operators-pk6ch\" (UID: \"95230c16-a1df-4406-8b31-e350c1981055\") " pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.498327 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95230c16-a1df-4406-8b31-e350c1981055-catalog-content\") pod \"redhat-operators-pk6ch\" (UID: \"95230c16-a1df-4406-8b31-e350c1981055\") " pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.520764 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpgrw\" (UniqueName: \"kubernetes.io/projected/95230c16-a1df-4406-8b31-e350c1981055-kube-api-access-kpgrw\") pod \"redhat-operators-pk6ch\" (UID: 
\"95230c16-a1df-4406-8b31-e350c1981055\") " pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.560707 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.573553 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fflcj"] Sep 30 09:51:01 crc kubenswrapper[4970]: W0930 09:51:01.582034 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbab7817b_f28e_447d_98f5_8fb66262d7ec.slice/crio-706d240c88bbc1566bdb216d030a7b43cea1eab15f4d29b40090d0f1c540b436 WatchSource:0}: Error finding container 706d240c88bbc1566bdb216d030a7b43cea1eab15f4d29b40090d0f1c540b436: Status 404 returned error can't find the container with id 706d240c88bbc1566bdb216d030a7b43cea1eab15f4d29b40090d0f1c540b436 Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.674908 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14645e18-5ae5-40f7-b52f-591a49032bc0" path="/var/lib/kubelet/pods/14645e18-5ae5-40f7-b52f-591a49032bc0/volumes" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.675463 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8140e36b-113c-4dbb-982d-4f94ec7c0a5f" path="/var/lib/kubelet/pods/8140e36b-113c-4dbb-982d-4f94ec7c0a5f/volumes" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.676103 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa02b988-0645-4466-a5bc-9c99033fdcdc" path="/var/lib/kubelet/pods/aa02b988-0645-4466-a5bc-9c99033fdcdc/volumes" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.677337 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3b26d6c-14bb-4572-87c8-6da4476f88a4" path="/var/lib/kubelet/pods/c3b26d6c-14bb-4572-87c8-6da4476f88a4/volumes" Sep 30 09:51:01 crc kubenswrapper[4970]: I0930 09:51:01.772576 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pk6ch"] Sep 30 09:51:01 crc kubenswrapper[4970]: W0930 09:51:01.831146 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95230c16_a1df_4406_8b31_e350c1981055.slice/crio-894ae25a4379ba49703b01916ebc815adf6c094b9a51b841ca3dda4dd1b43257 WatchSource:0}: Error finding container 894ae25a4379ba49703b01916ebc815adf6c094b9a51b841ca3dda4dd1b43257: Status 404 returned error can't find the container with id 894ae25a4379ba49703b01916ebc815adf6c094b9a51b841ca3dda4dd1b43257 Sep 30 09:51:02 crc kubenswrapper[4970]: I0930 09:51:02.359385 4970 generic.go:334] "Generic (PLEG): container finished" podID="bab7817b-f28e-447d-98f5-8fb66262d7ec" containerID="10247bfd9e3581912574a43de08e869f8271e5fa058ed86cf4e296d4907ba28d" exitCode=0 Sep 30 09:51:02 crc kubenswrapper[4970]: I0930 09:51:02.359436 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fflcj" event={"ID":"bab7817b-f28e-447d-98f5-8fb66262d7ec","Type":"ContainerDied","Data":"10247bfd9e3581912574a43de08e869f8271e5fa058ed86cf4e296d4907ba28d"} Sep 30 09:51:02 crc kubenswrapper[4970]: I0930 09:51:02.359484 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fflcj" 
event={"ID":"bab7817b-f28e-447d-98f5-8fb66262d7ec","Type":"ContainerStarted","Data":"706d240c88bbc1566bdb216d030a7b43cea1eab15f4d29b40090d0f1c540b436"} Sep 30 09:51:02 crc kubenswrapper[4970]: I0930 09:51:02.361515 4970 generic.go:334] "Generic (PLEG): container finished" podID="95230c16-a1df-4406-8b31-e350c1981055" containerID="65f2fdb0f937a6ad0033a2e83c6fd2deccf15097b87723328f40f6975bc2813d" exitCode=0 Sep 30 09:51:02 crc kubenswrapper[4970]: I0930 09:51:02.362173 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk6ch" event={"ID":"95230c16-a1df-4406-8b31-e350c1981055","Type":"ContainerDied","Data":"65f2fdb0f937a6ad0033a2e83c6fd2deccf15097b87723328f40f6975bc2813d"} Sep 30 09:51:02 crc kubenswrapper[4970]: I0930 09:51:02.362210 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk6ch" event={"ID":"95230c16-a1df-4406-8b31-e350c1981055","Type":"ContainerStarted","Data":"894ae25a4379ba49703b01916ebc815adf6c094b9a51b841ca3dda4dd1b43257"} Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.445978 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5wwzc"] Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.447632 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.451123 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wwzc"] Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.451143 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.631922 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4281f20f-ca65-49c5-9217-b9a730147510-utilities\") pod \"certified-operators-5wwzc\" (UID: \"4281f20f-ca65-49c5-9217-b9a730147510\") " pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.635219 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qbk2\" (UniqueName: \"kubernetes.io/projected/4281f20f-ca65-49c5-9217-b9a730147510-kube-api-access-9qbk2\") pod \"certified-operators-5wwzc\" (UID: \"4281f20f-ca65-49c5-9217-b9a730147510\") " pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.635303 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4281f20f-ca65-49c5-9217-b9a730147510-catalog-content\") pod \"certified-operators-5wwzc\" (UID: \"4281f20f-ca65-49c5-9217-b9a730147510\") " pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.642103 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ft8rk"] Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.644031 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.692150 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.702057 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ft8rk"] Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.736871 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qbk2\" (UniqueName: \"kubernetes.io/projected/4281f20f-ca65-49c5-9217-b9a730147510-kube-api-access-9qbk2\") pod \"certified-operators-5wwzc\" (UID: \"4281f20f-ca65-49c5-9217-b9a730147510\") " pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.737530 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4281f20f-ca65-49c5-9217-b9a730147510-catalog-content\") pod \"certified-operators-5wwzc\" (UID: \"4281f20f-ca65-49c5-9217-b9a730147510\") " pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.738658 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4281f20f-ca65-49c5-9217-b9a730147510-catalog-content\") pod \"certified-operators-5wwzc\" (UID: \"4281f20f-ca65-49c5-9217-b9a730147510\") " pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.738746 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4281f20f-ca65-49c5-9217-b9a730147510-utilities\") pod \"certified-operators-5wwzc\" (UID: \"4281f20f-ca65-49c5-9217-b9a730147510\") " pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.740044 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4281f20f-ca65-49c5-9217-b9a730147510-utilities\") pod \"certified-operators-5wwzc\" (UID: \"4281f20f-ca65-49c5-9217-b9a730147510\") " pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.763134 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qbk2\" (UniqueName: \"kubernetes.io/projected/4281f20f-ca65-49c5-9217-b9a730147510-kube-api-access-9qbk2\") pod \"certified-operators-5wwzc\" (UID: \"4281f20f-ca65-49c5-9217-b9a730147510\") " pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.804261 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.839226 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1af6628-add8-425c-b470-b8c413f69624-utilities\") pod \"community-operators-ft8rk\" (UID: \"b1af6628-add8-425c-b470-b8c413f69624\") " pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.839746 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpfgp\" (UniqueName: \"kubernetes.io/projected/b1af6628-add8-425c-b470-b8c413f69624-kube-api-access-cpfgp\") pod \"community-operators-ft8rk\" (UID: \"b1af6628-add8-425c-b470-b8c413f69624\") " pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.839770 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1af6628-add8-425c-b470-b8c413f69624-catalog-content\") pod \"community-operators-ft8rk\" (UID: \"b1af6628-add8-425c-b470-b8c413f69624\") " pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.941301 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1af6628-add8-425c-b470-b8c413f69624-catalog-content\") pod \"community-operators-ft8rk\" (UID: \"b1af6628-add8-425c-b470-b8c413f69624\") " pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.941497 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1af6628-add8-425c-b470-b8c413f69624-utilities\") pod \"community-operators-ft8rk\" (UID: \"b1af6628-add8-425c-b470-b8c413f69624\") " pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.941555 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpfgp\" (UniqueName: \"kubernetes.io/projected/b1af6628-add8-425c-b470-b8c413f69624-kube-api-access-cpfgp\") pod \"community-operators-ft8rk\" (UID: \"b1af6628-add8-425c-b470-b8c413f69624\") " pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.941962 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1af6628-add8-425c-b470-b8c413f69624-catalog-content\") pod \"community-operators-ft8rk\" (UID: \"b1af6628-add8-425c-b470-b8c413f69624\") " pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.942096 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1af6628-add8-425c-b470-b8c413f69624-utilities\") pod \"community-operators-ft8rk\" (UID: \"b1af6628-add8-425c-b470-b8c413f69624\") " pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:03 crc kubenswrapper[4970]: I0930 09:51:03.963785 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpfgp\" (UniqueName: \"kubernetes.io/projected/b1af6628-add8-425c-b470-b8c413f69624-kube-api-access-cpfgp\") pod 
\"community-operators-ft8rk\" (UID: \"b1af6628-add8-425c-b470-b8c413f69624\") " pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:04 crc kubenswrapper[4970]: I0930 09:51:04.029334 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wwzc"] Sep 30 09:51:04 crc kubenswrapper[4970]: I0930 09:51:04.081796 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:04 crc kubenswrapper[4970]: I0930 09:51:04.375725 4970 generic.go:334] "Generic (PLEG): container finished" podID="4281f20f-ca65-49c5-9217-b9a730147510" containerID="bdc5adfd9a10d82e2ef3d211faa4c1388190462aee9bd1f7cd9cd9cb7966c168" exitCode=0 Sep 30 09:51:04 crc kubenswrapper[4970]: I0930 09:51:04.375849 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wwzc" event={"ID":"4281f20f-ca65-49c5-9217-b9a730147510","Type":"ContainerDied","Data":"bdc5adfd9a10d82e2ef3d211faa4c1388190462aee9bd1f7cd9cd9cb7966c168"} Sep 30 09:51:04 crc kubenswrapper[4970]: I0930 09:51:04.375904 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wwzc" event={"ID":"4281f20f-ca65-49c5-9217-b9a730147510","Type":"ContainerStarted","Data":"fe5ae57e7a87b34f0d4076c3552c6a43d68442c42502b2430362834c61a4f24a"} Sep 30 09:51:04 crc kubenswrapper[4970]: I0930 09:51:04.379952 4970 generic.go:334] "Generic (PLEG): container finished" podID="bab7817b-f28e-447d-98f5-8fb66262d7ec" containerID="5f4f48454585c1dc1f7aaff76bf7da8b17a08fdeecb263f7f8aa198b4bd30aec" exitCode=0 Sep 30 09:51:04 crc kubenswrapper[4970]: I0930 09:51:04.380047 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fflcj" event={"ID":"bab7817b-f28e-447d-98f5-8fb66262d7ec","Type":"ContainerDied","Data":"5f4f48454585c1dc1f7aaff76bf7da8b17a08fdeecb263f7f8aa198b4bd30aec"} Sep 30 09:51:04 crc kubenswrapper[4970]: I0930 09:51:04.390743 4970 generic.go:334] "Generic (PLEG): container finished" podID="95230c16-a1df-4406-8b31-e350c1981055" containerID="f37c7f8db310539953aa04392c292947977e9177cbc7ba321c306f5d9b405439" exitCode=0 Sep 30 09:51:04 crc kubenswrapper[4970]: I0930 09:51:04.390803 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk6ch" event={"ID":"95230c16-a1df-4406-8b31-e350c1981055","Type":"ContainerDied","Data":"f37c7f8db310539953aa04392c292947977e9177cbc7ba321c306f5d9b405439"} Sep 30 09:51:04 crc kubenswrapper[4970]: I0930 09:51:04.530948 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ft8rk"] Sep 30 09:51:05 crc kubenswrapper[4970]: I0930 09:51:05.398541 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fflcj" event={"ID":"bab7817b-f28e-447d-98f5-8fb66262d7ec","Type":"ContainerStarted","Data":"ed3947d1eec1e531937180a29b003a6f604d497dfa3ea73286c50b4c50a07b9d"} Sep 30 09:51:05 crc kubenswrapper[4970]: I0930 09:51:05.406802 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk6ch" event={"ID":"95230c16-a1df-4406-8b31-e350c1981055","Type":"ContainerStarted","Data":"4c1be12ffd710043b398885315bdffc12465177db26e23e3624fce9b9bd2363f"} Sep 30 09:51:05 crc kubenswrapper[4970]: I0930 09:51:05.408665 4970 generic.go:334] "Generic (PLEG): container finished" podID="b1af6628-add8-425c-b470-b8c413f69624" 
containerID="4f0103b865052e86320ce3bf3666f726f428aa47f0935e7b55981c0204aad7d7" exitCode=0 Sep 30 09:51:05 crc kubenswrapper[4970]: I0930 09:51:05.408722 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft8rk" event={"ID":"b1af6628-add8-425c-b470-b8c413f69624","Type":"ContainerDied","Data":"4f0103b865052e86320ce3bf3666f726f428aa47f0935e7b55981c0204aad7d7"} Sep 30 09:51:05 crc kubenswrapper[4970]: I0930 09:51:05.408755 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft8rk" event={"ID":"b1af6628-add8-425c-b470-b8c413f69624","Type":"ContainerStarted","Data":"3e4f1106fc17df73493287aec9f4209a14cc4c3cd24fe34a235b35884751ddcd"} Sep 30 09:51:05 crc kubenswrapper[4970]: I0930 09:51:05.420642 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fflcj" podStartSLOduration=1.924224607 podStartE2EDuration="4.42061349s" podCreationTimestamp="2025-09-30 09:51:01 +0000 UTC" firstStartedPulling="2025-09-30 09:51:02.362863758 +0000 UTC m=+275.434714692" lastFinishedPulling="2025-09-30 09:51:04.859252611 +0000 UTC m=+277.931103575" observedRunningTime="2025-09-30 09:51:05.418329901 +0000 UTC m=+278.490180835" watchObservedRunningTime="2025-09-30 09:51:05.42061349 +0000 UTC m=+278.492464424" Sep 30 09:51:05 crc kubenswrapper[4970]: I0930 09:51:05.463104 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pk6ch" podStartSLOduration=2.040220271 podStartE2EDuration="4.463080493s" podCreationTimestamp="2025-09-30 09:51:01 +0000 UTC" firstStartedPulling="2025-09-30 09:51:02.365165917 +0000 UTC m=+275.437016851" lastFinishedPulling="2025-09-30 09:51:04.788026149 +0000 UTC m=+277.859877073" observedRunningTime="2025-09-30 09:51:05.460927438 +0000 UTC m=+278.532778372" watchObservedRunningTime="2025-09-30 09:51:05.463080493 +0000 UTC m=+278.534931437" Sep 30 09:51:06 crc kubenswrapper[4970]: I0930 09:51:06.418307 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft8rk" event={"ID":"b1af6628-add8-425c-b470-b8c413f69624","Type":"ContainerStarted","Data":"7eeca94c8d9bc8a1c9108ec6e90a0e7e155bd3fbb00e6f747f7d7b6daf3d0b65"} Sep 30 09:51:06 crc kubenswrapper[4970]: I0930 09:51:06.422251 4970 generic.go:334] "Generic (PLEG): container finished" podID="4281f20f-ca65-49c5-9217-b9a730147510" containerID="cad3284927321d7a4c6da0e2a19371f82e28e217909fbb8aaec78bf6c4b84bc0" exitCode=0 Sep 30 09:51:06 crc kubenswrapper[4970]: I0930 09:51:06.422344 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wwzc" event={"ID":"4281f20f-ca65-49c5-9217-b9a730147510","Type":"ContainerDied","Data":"cad3284927321d7a4c6da0e2a19371f82e28e217909fbb8aaec78bf6c4b84bc0"} Sep 30 09:51:07 crc kubenswrapper[4970]: I0930 09:51:07.431560 4970 generic.go:334] "Generic (PLEG): container finished" podID="b1af6628-add8-425c-b470-b8c413f69624" containerID="7eeca94c8d9bc8a1c9108ec6e90a0e7e155bd3fbb00e6f747f7d7b6daf3d0b65" exitCode=0 Sep 30 09:51:07 crc kubenswrapper[4970]: I0930 09:51:07.432438 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft8rk" event={"ID":"b1af6628-add8-425c-b470-b8c413f69624","Type":"ContainerDied","Data":"7eeca94c8d9bc8a1c9108ec6e90a0e7e155bd3fbb00e6f747f7d7b6daf3d0b65"} Sep 30 09:51:08 crc kubenswrapper[4970]: I0930 09:51:08.441203 4970 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ft8rk" event={"ID":"b1af6628-add8-425c-b470-b8c413f69624","Type":"ContainerStarted","Data":"c09c55d87128433d953704df3532c35468ffe5e7b2bba586a143b070fee4762a"} Sep 30 09:51:08 crc kubenswrapper[4970]: I0930 09:51:08.444461 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wwzc" event={"ID":"4281f20f-ca65-49c5-9217-b9a730147510","Type":"ContainerStarted","Data":"5220b22e9bcc6fb53e90ad8bc0ad5bceb6b7462a041671b30197a3b30075051a"} Sep 30 09:51:08 crc kubenswrapper[4970]: I0930 09:51:08.464298 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ft8rk" podStartSLOduration=3.039666192 podStartE2EDuration="5.464282416s" podCreationTimestamp="2025-09-30 09:51:03 +0000 UTC" firstStartedPulling="2025-09-30 09:51:05.410496114 +0000 UTC m=+278.482347048" lastFinishedPulling="2025-09-30 09:51:07.835112318 +0000 UTC m=+280.906963272" observedRunningTime="2025-09-30 09:51:08.461687138 +0000 UTC m=+281.533538072" watchObservedRunningTime="2025-09-30 09:51:08.464282416 +0000 UTC m=+281.536133350" Sep 30 09:51:08 crc kubenswrapper[4970]: I0930 09:51:08.486097 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5wwzc" podStartSLOduration=3.010923705 podStartE2EDuration="5.486077885s" podCreationTimestamp="2025-09-30 09:51:03 +0000 UTC" firstStartedPulling="2025-09-30 09:51:04.378536437 +0000 UTC m=+277.450387371" lastFinishedPulling="2025-09-30 09:51:06.853690617 +0000 UTC m=+279.925541551" observedRunningTime="2025-09-30 09:51:08.481889818 +0000 UTC m=+281.553740752" watchObservedRunningTime="2025-09-30 09:51:08.486077885 +0000 UTC m=+281.557928819" Sep 30 09:51:11 crc kubenswrapper[4970]: I0930 09:51:11.359861 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:11 crc kubenswrapper[4970]: I0930 09:51:11.360474 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:11 crc kubenswrapper[4970]: I0930 09:51:11.443191 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:11 crc kubenswrapper[4970]: I0930 09:51:11.512880 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fflcj" Sep 30 09:51:11 crc kubenswrapper[4970]: I0930 09:51:11.578440 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:11 crc kubenswrapper[4970]: I0930 09:51:11.578502 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:11 crc kubenswrapper[4970]: I0930 09:51:11.626832 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:12 crc kubenswrapper[4970]: I0930 09:51:12.516468 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 09:51:13 crc kubenswrapper[4970]: I0930 09:51:13.805214 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:13 crc 
kubenswrapper[4970]: I0930 09:51:13.805551 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:13 crc kubenswrapper[4970]: I0930 09:51:13.846580 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:51:14 crc kubenswrapper[4970]: I0930 09:51:14.083297 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:14 crc kubenswrapper[4970]: I0930 09:51:14.083408 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:14 crc kubenswrapper[4970]: I0930 09:51:14.133534 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:14 crc kubenswrapper[4970]: I0930 09:51:14.526738 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ft8rk" Sep 30 09:51:14 crc kubenswrapper[4970]: I0930 09:51:14.527809 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 09:52:34 crc kubenswrapper[4970]: I0930 09:52:34.821521 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 09:52:34 crc kubenswrapper[4970]: I0930 09:52:34.822500 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.071245 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4hrf6"] Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.073070 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.080767 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4hrf6"] Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.224900 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-registry-certificates\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.225013 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.225272 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.225409 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.225501 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-bound-sa-token\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.225550 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-trusted-ca\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.225589 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jq8m\" (UniqueName: \"kubernetes.io/projected/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-kube-api-access-7jq8m\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.225673 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-registry-tls\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.255143 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.326583 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-registry-tls\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.326637 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-registry-certificates\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.326697 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.326740 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.326766 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-bound-sa-token\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.326806 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-trusted-ca\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.326836 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jq8m\" (UniqueName: \"kubernetes.io/projected/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-kube-api-access-7jq8m\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.327855 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-registry-certificates\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.328148 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-trusted-ca\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.328615 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.334424 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.334710 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-registry-tls\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.344272 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-bound-sa-token\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.344691 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jq8m\" (UniqueName: \"kubernetes.io/projected/6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb-kube-api-access-7jq8m\") pod \"image-registry-66df7c8f76-4hrf6\" (UID: \"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb\") " pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.398378 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.610658 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4hrf6"] Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.821242 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 09:53:04 crc kubenswrapper[4970]: I0930 09:53:04.821344 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 09:53:05 crc kubenswrapper[4970]: I0930 09:53:05.202891 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" event={"ID":"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb","Type":"ContainerStarted","Data":"65da5be821ddfe1a499cbb5378ffb4a1c5ef1bb37b85023d733aba03a965c89e"} Sep 30 09:53:05 crc kubenswrapper[4970]: I0930 09:53:05.202959 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" event={"ID":"6349d73c-5b7c-4cad-a7d7-b9e3cb5a7adb","Type":"ContainerStarted","Data":"35e5ccd6584302d76bb73cf3239dd9b9916541fa48be0566b40b8b5edcaaad98"} Sep 30 09:53:05 crc kubenswrapper[4970]: I0930 09:53:05.203430 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:05 crc kubenswrapper[4970]: I0930 09:53:05.225345 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" podStartSLOduration=1.225316788 podStartE2EDuration="1.225316788s" podCreationTimestamp="2025-09-30 09:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:53:05.224314959 +0000 UTC m=+398.296165933" watchObservedRunningTime="2025-09-30 09:53:05.225316788 +0000 UTC m=+398.297167722" Sep 30 09:53:24 crc kubenswrapper[4970]: I0930 09:53:24.404720 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4hrf6" Sep 30 09:53:24 crc kubenswrapper[4970]: I0930 09:53:24.478524 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vjppp"] Sep 30 09:53:34 crc kubenswrapper[4970]: I0930 09:53:34.821952 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 09:53:34 crc kubenswrapper[4970]: I0930 09:53:34.822774 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Sep 30 09:53:34 crc kubenswrapper[4970]: I0930 09:53:34.822871 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:53:34 crc kubenswrapper[4970]: I0930 09:53:34.824030 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3256d43a82b3895388a142b25e718f337799e9f30680508a3d1264e9ba385ed1"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 09:53:34 crc kubenswrapper[4970]: I0930 09:53:34.824179 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://3256d43a82b3895388a142b25e718f337799e9f30680508a3d1264e9ba385ed1" gracePeriod=600 Sep 30 09:53:35 crc kubenswrapper[4970]: I0930 09:53:35.413841 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="3256d43a82b3895388a142b25e718f337799e9f30680508a3d1264e9ba385ed1" exitCode=0 Sep 30 09:53:35 crc kubenswrapper[4970]: I0930 09:53:35.414847 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"3256d43a82b3895388a142b25e718f337799e9f30680508a3d1264e9ba385ed1"} Sep 30 09:53:35 crc kubenswrapper[4970]: I0930 09:53:35.414895 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"dc4842ae8bfd4b8ab89c01d01940a6e7946834cd5975d69ed84b67c89bbc8813"} Sep 30 09:53:35 crc kubenswrapper[4970]: I0930 09:53:35.414916 4970 scope.go:117] "RemoveContainer" containerID="d08bbb5e33815c3d41adea39d0f8df112e1729628f456eec5df2b3974020b1ba" Sep 30 09:53:49 crc kubenswrapper[4970]: I0930 09:53:49.521058 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" podUID="0f2c73d3-00d3-491e-8050-fe9f69126993" containerName="registry" containerID="cri-o://afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf" gracePeriod=30 Sep 30 09:53:49 crc kubenswrapper[4970]: I0930 09:53:49.935517 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.032107 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f2c73d3-00d3-491e-8050-fe9f69126993-installation-pull-secrets\") pod \"0f2c73d3-00d3-491e-8050-fe9f69126993\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.032344 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f2c73d3-00d3-491e-8050-fe9f69126993-registry-certificates\") pod \"0f2c73d3-00d3-491e-8050-fe9f69126993\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.032499 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-bound-sa-token\") pod \"0f2c73d3-00d3-491e-8050-fe9f69126993\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.032568 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2c73d3-00d3-491e-8050-fe9f69126993-trusted-ca\") pod \"0f2c73d3-00d3-491e-8050-fe9f69126993\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.032646 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-registry-tls\") pod \"0f2c73d3-00d3-491e-8050-fe9f69126993\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.033387 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f2c73d3-00d3-491e-8050-fe9f69126993-ca-trust-extracted\") pod \"0f2c73d3-00d3-491e-8050-fe9f69126993\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.034541 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq4b6\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-kube-api-access-mq4b6\") pod \"0f2c73d3-00d3-491e-8050-fe9f69126993\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.033442 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f2c73d3-00d3-491e-8050-fe9f69126993-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "0f2c73d3-00d3-491e-8050-fe9f69126993" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.033458 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f2c73d3-00d3-491e-8050-fe9f69126993-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "0f2c73d3-00d3-491e-8050-fe9f69126993" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.034855 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"0f2c73d3-00d3-491e-8050-fe9f69126993\" (UID: \"0f2c73d3-00d3-491e-8050-fe9f69126993\") " Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.035563 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f2c73d3-00d3-491e-8050-fe9f69126993-trusted-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.035591 4970 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/0f2c73d3-00d3-491e-8050-fe9f69126993-registry-certificates\") on node \"crc\" DevicePath \"\"" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.039692 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "0f2c73d3-00d3-491e-8050-fe9f69126993" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.051560 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-kube-api-access-mq4b6" (OuterVolumeSpecName: "kube-api-access-mq4b6") pod "0f2c73d3-00d3-491e-8050-fe9f69126993" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993"). InnerVolumeSpecName "kube-api-access-mq4b6". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.052327 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "0f2c73d3-00d3-491e-8050-fe9f69126993" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.052380 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f2c73d3-00d3-491e-8050-fe9f69126993-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "0f2c73d3-00d3-491e-8050-fe9f69126993" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.053199 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "0f2c73d3-00d3-491e-8050-fe9f69126993" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.062396 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f2c73d3-00d3-491e-8050-fe9f69126993-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "0f2c73d3-00d3-491e-8050-fe9f69126993" (UID: "0f2c73d3-00d3-491e-8050-fe9f69126993"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.136923 4970 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/0f2c73d3-00d3-491e-8050-fe9f69126993-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.137392 4970 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-bound-sa-token\") on node \"crc\" DevicePath \"\"" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.137406 4970 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-registry-tls\") on node \"crc\" DevicePath \"\"" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.137416 4970 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/0f2c73d3-00d3-491e-8050-fe9f69126993-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.137425 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq4b6\" (UniqueName: \"kubernetes.io/projected/0f2c73d3-00d3-491e-8050-fe9f69126993-kube-api-access-mq4b6\") on node \"crc\" DevicePath \"\"" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.523158 4970 generic.go:334] "Generic (PLEG): container finished" podID="0f2c73d3-00d3-491e-8050-fe9f69126993" containerID="afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf" exitCode=0 Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.523257 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.523245 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" event={"ID":"0f2c73d3-00d3-491e-8050-fe9f69126993","Type":"ContainerDied","Data":"afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf"} Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.523346 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-vjppp" event={"ID":"0f2c73d3-00d3-491e-8050-fe9f69126993","Type":"ContainerDied","Data":"c950827431aba5330225a2efc6400e9a9dc2e0758ad5009a5d81eaca15e49cea"} Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.523397 4970 scope.go:117] "RemoveContainer" containerID="afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.550435 4970 scope.go:117] "RemoveContainer" containerID="afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf" Sep 30 09:53:50 crc kubenswrapper[4970]: E0930 09:53:50.551068 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf\": container with ID starting with afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf not found: ID does not exist" containerID="afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.551124 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf"} err="failed to get container status \"afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf\": rpc error: code = NotFound desc = could not find container \"afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf\": container with ID starting with afa45004d16ef7561444dc5fbe35b746aea9ad5fc520ac5dfc4486cf25cb1bdf not found: ID does not exist" Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.571138 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vjppp"] Sep 30 09:53:50 crc kubenswrapper[4970]: I0930 09:53:50.577131 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-vjppp"] Sep 30 09:53:51 crc kubenswrapper[4970]: I0930 09:53:51.676493 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2c73d3-00d3-491e-8050-fe9f69126993" path="/var/lib/kubelet/pods/0f2c73d3-00d3-491e-8050-fe9f69126993/volumes" Sep 30 09:56:04 crc kubenswrapper[4970]: I0930 09:56:04.821694 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 09:56:04 crc kubenswrapper[4970]: I0930 09:56:04.822552 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 09:56:21 crc 
kubenswrapper[4970]: I0930 09:56:21.090572 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8zhcv"] Sep 30 09:56:21 crc kubenswrapper[4970]: E0930 09:56:21.091645 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2c73d3-00d3-491e-8050-fe9f69126993" containerName="registry" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.091660 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2c73d3-00d3-491e-8050-fe9f69126993" containerName="registry" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.091767 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2c73d3-00d3-491e-8050-fe9f69126993" containerName="registry" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.092499 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-8zhcv" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.096446 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.096541 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.096861 4970 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-xjfnj" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.107525 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8zhcv"] Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.112966 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-lzk5z"] Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.113834 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-lzk5z" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.118665 4970 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-jdjvf" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.129758 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l8qdh"] Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.130886 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8qdh" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.134198 4970 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-r6829" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.147837 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-lzk5z"] Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.150550 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l8qdh"] Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.162822 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls44m\" (UniqueName: \"kubernetes.io/projected/56eac2ba-1797-44ac-9f39-83f71a6f689d-kube-api-access-ls44m\") pod \"cert-manager-cainjector-7f985d654d-8zhcv\" (UID: \"56eac2ba-1797-44ac-9f39-83f71a6f689d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8zhcv" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.162928 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nr7m\" (UniqueName: \"kubernetes.io/projected/5d792ad1-1442-40dc-a7d1-df5284e06e35-kube-api-access-6nr7m\") pod \"cert-manager-5b446d88c5-lzk5z\" (UID: \"5d792ad1-1442-40dc-a7d1-df5284e06e35\") " pod="cert-manager/cert-manager-5b446d88c5-lzk5z" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.162965 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjx24\" (UniqueName: \"kubernetes.io/projected/ee427339-b272-4768-bb9d-27fb3e8eab0e-kube-api-access-vjx24\") pod \"cert-manager-webhook-5655c58dd6-l8qdh\" (UID: \"ee427339-b272-4768-bb9d-27fb3e8eab0e\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l8qdh" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.264676 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nr7m\" (UniqueName: \"kubernetes.io/projected/5d792ad1-1442-40dc-a7d1-df5284e06e35-kube-api-access-6nr7m\") pod \"cert-manager-5b446d88c5-lzk5z\" (UID: \"5d792ad1-1442-40dc-a7d1-df5284e06e35\") " pod="cert-manager/cert-manager-5b446d88c5-lzk5z" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.264758 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjx24\" (UniqueName: \"kubernetes.io/projected/ee427339-b272-4768-bb9d-27fb3e8eab0e-kube-api-access-vjx24\") pod \"cert-manager-webhook-5655c58dd6-l8qdh\" (UID: \"ee427339-b272-4768-bb9d-27fb3e8eab0e\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l8qdh" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.264812 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls44m\" (UniqueName: \"kubernetes.io/projected/56eac2ba-1797-44ac-9f39-83f71a6f689d-kube-api-access-ls44m\") pod \"cert-manager-cainjector-7f985d654d-8zhcv\" (UID: \"56eac2ba-1797-44ac-9f39-83f71a6f689d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8zhcv" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.285706 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nr7m\" (UniqueName: \"kubernetes.io/projected/5d792ad1-1442-40dc-a7d1-df5284e06e35-kube-api-access-6nr7m\") pod \"cert-manager-5b446d88c5-lzk5z\" (UID: \"5d792ad1-1442-40dc-a7d1-df5284e06e35\") " 
pod="cert-manager/cert-manager-5b446d88c5-lzk5z" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.285782 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjx24\" (UniqueName: \"kubernetes.io/projected/ee427339-b272-4768-bb9d-27fb3e8eab0e-kube-api-access-vjx24\") pod \"cert-manager-webhook-5655c58dd6-l8qdh\" (UID: \"ee427339-b272-4768-bb9d-27fb3e8eab0e\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-l8qdh" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.289901 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls44m\" (UniqueName: \"kubernetes.io/projected/56eac2ba-1797-44ac-9f39-83f71a6f689d-kube-api-access-ls44m\") pod \"cert-manager-cainjector-7f985d654d-8zhcv\" (UID: \"56eac2ba-1797-44ac-9f39-83f71a6f689d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-8zhcv" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.419942 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-8zhcv" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.430485 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-lzk5z" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.445053 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8qdh" Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.748907 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-lzk5z"] Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.759639 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.780622 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-l8qdh"] Sep 30 09:56:21 crc kubenswrapper[4970]: W0930 09:56:21.783110 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee427339_b272_4768_bb9d_27fb3e8eab0e.slice/crio-a78f106a1dba6e457ecb5b7ae441b2c67613c491b41aa6f503e82bac4c208ad2 WatchSource:0}: Error finding container a78f106a1dba6e457ecb5b7ae441b2c67613c491b41aa6f503e82bac4c208ad2: Status 404 returned error can't find the container with id a78f106a1dba6e457ecb5b7ae441b2c67613c491b41aa6f503e82bac4c208ad2 Sep 30 09:56:21 crc kubenswrapper[4970]: I0930 09:56:21.892904 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-8zhcv"] Sep 30 09:56:21 crc kubenswrapper[4970]: W0930 09:56:21.900880 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56eac2ba_1797_44ac_9f39_83f71a6f689d.slice/crio-013ff8f30999212a86ffc72a3268b7c2f7ee1c30ebe51d6fd97f21019dea6ca5 WatchSource:0}: Error finding container 013ff8f30999212a86ffc72a3268b7c2f7ee1c30ebe51d6fd97f21019dea6ca5: Status 404 returned error can't find the container with id 013ff8f30999212a86ffc72a3268b7c2f7ee1c30ebe51d6fd97f21019dea6ca5 Sep 30 09:56:22 crc kubenswrapper[4970]: I0930 09:56:22.607568 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-8zhcv" 
event={"ID":"56eac2ba-1797-44ac-9f39-83f71a6f689d","Type":"ContainerStarted","Data":"013ff8f30999212a86ffc72a3268b7c2f7ee1c30ebe51d6fd97f21019dea6ca5"} Sep 30 09:56:22 crc kubenswrapper[4970]: I0930 09:56:22.610842 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8qdh" event={"ID":"ee427339-b272-4768-bb9d-27fb3e8eab0e","Type":"ContainerStarted","Data":"a78f106a1dba6e457ecb5b7ae441b2c67613c491b41aa6f503e82bac4c208ad2"} Sep 30 09:56:22 crc kubenswrapper[4970]: I0930 09:56:22.613367 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-lzk5z" event={"ID":"5d792ad1-1442-40dc-a7d1-df5284e06e35","Type":"ContainerStarted","Data":"585da69857caa30bf53bf2c3843bcf0b60d007b45c514b3e08e552a383c88727"} Sep 30 09:56:26 crc kubenswrapper[4970]: I0930 09:56:26.640405 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-8zhcv" event={"ID":"56eac2ba-1797-44ac-9f39-83f71a6f689d","Type":"ContainerStarted","Data":"430bd4985efa3035104d09f02891f3f23e069da776d49900571baad46101f6e5"} Sep 30 09:56:26 crc kubenswrapper[4970]: I0930 09:56:26.642444 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8qdh" event={"ID":"ee427339-b272-4768-bb9d-27fb3e8eab0e","Type":"ContainerStarted","Data":"4ea8d624a0afaa956a27bb0c79ed6763cc90eddeebdab7d5301d300207d3557d"} Sep 30 09:56:26 crc kubenswrapper[4970]: I0930 09:56:26.642619 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8qdh" Sep 30 09:56:26 crc kubenswrapper[4970]: I0930 09:56:26.644444 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-lzk5z" event={"ID":"5d792ad1-1442-40dc-a7d1-df5284e06e35","Type":"ContainerStarted","Data":"30641a9b0bc30a04987c9b8f33fcf5030bb2d0e0a191a0f92d0839381cd0d68e"} Sep 30 09:56:26 crc kubenswrapper[4970]: I0930 09:56:26.662621 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-8zhcv" podStartSLOduration=1.660347129 podStartE2EDuration="5.66258857s" podCreationTimestamp="2025-09-30 09:56:21 +0000 UTC" firstStartedPulling="2025-09-30 09:56:21.903337639 +0000 UTC m=+594.975188583" lastFinishedPulling="2025-09-30 09:56:25.90557905 +0000 UTC m=+598.977430024" observedRunningTime="2025-09-30 09:56:26.656097269 +0000 UTC m=+599.727948293" watchObservedRunningTime="2025-09-30 09:56:26.66258857 +0000 UTC m=+599.734439554" Sep 30 09:56:26 crc kubenswrapper[4970]: I0930 09:56:26.683024 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8qdh" podStartSLOduration=1.5693204490000001 podStartE2EDuration="5.682978996s" podCreationTimestamp="2025-09-30 09:56:21 +0000 UTC" firstStartedPulling="2025-09-30 09:56:21.785613988 +0000 UTC m=+594.857464922" lastFinishedPulling="2025-09-30 09:56:25.899272535 +0000 UTC m=+598.971123469" observedRunningTime="2025-09-30 09:56:26.679820458 +0000 UTC m=+599.751671432" watchObservedRunningTime="2025-09-30 09:56:26.682978996 +0000 UTC m=+599.754829940" Sep 30 09:56:26 crc kubenswrapper[4970]: I0930 09:56:26.702396 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-lzk5z" podStartSLOduration=1.5862804700000002 podStartE2EDuration="5.702370265s" podCreationTimestamp="2025-09-30 09:56:21 +0000 UTC" 
firstStartedPulling="2025-09-30 09:56:21.759367238 +0000 UTC m=+594.831218162" lastFinishedPulling="2025-09-30 09:56:25.875457013 +0000 UTC m=+598.947307957" observedRunningTime="2025-09-30 09:56:26.699715361 +0000 UTC m=+599.771566315" watchObservedRunningTime="2025-09-30 09:56:26.702370265 +0000 UTC m=+599.774221209" Sep 30 09:56:31 crc kubenswrapper[4970]: I0930 09:56:31.448621 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-l8qdh" Sep 30 09:56:31 crc kubenswrapper[4970]: I0930 09:56:31.762620 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-frblw"] Sep 30 09:56:31 crc kubenswrapper[4970]: I0930 09:56:31.763307 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovn-controller" containerID="cri-o://9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040" gracePeriod=30 Sep 30 09:56:31 crc kubenswrapper[4970]: I0930 09:56:31.763392 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="nbdb" containerID="cri-o://a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954" gracePeriod=30 Sep 30 09:56:31 crc kubenswrapper[4970]: I0930 09:56:31.763670 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovn-acl-logging" containerID="cri-o://8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29" gracePeriod=30 Sep 30 09:56:31 crc kubenswrapper[4970]: I0930 09:56:31.763639 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="kube-rbac-proxy-node" containerID="cri-o://702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d" gracePeriod=30 Sep 30 09:56:31 crc kubenswrapper[4970]: I0930 09:56:31.763720 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff" gracePeriod=30 Sep 30 09:56:31 crc kubenswrapper[4970]: I0930 09:56:31.763684 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="sbdb" containerID="cri-o://ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04" gracePeriod=30 Sep 30 09:56:31 crc kubenswrapper[4970]: I0930 09:56:31.763744 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="northd" containerID="cri-o://7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306" gracePeriod=30 Sep 30 09:56:31 crc kubenswrapper[4970]: I0930 09:56:31.814752 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" 
containerID="cri-o://4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813" gracePeriod=30 Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.157517 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/3.log" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.160867 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovn-acl-logging/0.log" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.161455 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovn-controller/0.log" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.162117 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212294 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-kzf6f"] Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.212537 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212549 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.212557 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212563 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.212574 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovn-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212580 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovn-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.212586 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="northd" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212592 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="northd" Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.212604 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovn-acl-logging" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212610 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovn-acl-logging" Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.212617 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="nbdb" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212622 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="nbdb" Sep 30 09:56:32 crc kubenswrapper[4970]: 
E0930 09:56:32.212631 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212637 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.212647 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="kubecfg-setup" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212653 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="kubecfg-setup" Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.212664 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="kube-rbac-proxy-node" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212669 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="kube-rbac-proxy-node" Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.212679 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="kube-rbac-proxy-ovn-metrics" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212686 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="kube-rbac-proxy-ovn-metrics" Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.212695 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="sbdb" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212701 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="sbdb" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212796 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="sbdb" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212808 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="northd" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212817 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212823 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212830 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="nbdb" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212839 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212846 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="kube-rbac-proxy-ovn-metrics" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212855 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9687ea64-3693-468d-9fde-2059deb10338" 
containerName="kube-rbac-proxy-node" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212863 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovn-acl-logging" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212873 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovn-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.212960 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.212966 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.212977 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.213002 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.213091 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.213262 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9687ea64-3693-468d-9fde-2059deb10338" containerName="ovnkube-controller" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.214776 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.356780 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-slash\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.356820 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-openvswitch\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.356867 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-ovnkube-script-lib\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.356892 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-kubelet\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.356916 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-slash" (OuterVolumeSpecName: "host-slash") pod 
"9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.356925 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qpww\" (UniqueName: \"kubernetes.io/projected/9687ea64-3693-468d-9fde-2059deb10338-kube-api-access-2qpww\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.356934 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.356970 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.356974 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-run-netns\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357029 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357039 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-env-overrides\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357071 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-ovn\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357104 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-systemd\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357138 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-var-lib-openvswitch\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357164 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-node-log\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357190 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-cni-netd\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357239 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-cni-bin\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357261 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-etc-openvswitch\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357288 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-log-socket\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357311 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). 
InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357318 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-systemd-units\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357344 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357369 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357371 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-run-ovn-kubernetes\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357365 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-node-log" (OuterVolumeSpecName: "node-log") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357370 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357393 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357401 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-ovnkube-config\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357425 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-log-socket" (OuterVolumeSpecName: "log-socket") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357510 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9687ea64-3693-468d-9fde-2059deb10338-ovn-node-metrics-cert\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357424 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357459 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357465 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357590 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9687ea64-3693-468d-9fde-2059deb10338\" (UID: \"9687ea64-3693-468d-9fde-2059deb10338\") " Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357668 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357763 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357735 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357804 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-kubelet\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357840 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.357861 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-node-log\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358022 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-log-socket\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358076 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-systemd-units\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358099 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-run-systemd\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358180 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29e75c57-582d-4f9b-a3b3-0044785e3595-ovnkube-config\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358219 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-run-openvswitch\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358328 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-cni-bin\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358415 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-run-ovn\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358478 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-cni-netd\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358506 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-etc-openvswitch\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358616 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-var-lib-openvswitch\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358646 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/29e75c57-582d-4f9b-a3b3-0044785e3595-ovnkube-script-lib\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358668 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwxnf\" (UniqueName: \"kubernetes.io/projected/29e75c57-582d-4f9b-a3b3-0044785e3595-kube-api-access-wwxnf\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 
30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358695 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-slash\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358718 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29e75c57-582d-4f9b-a3b3-0044785e3595-env-overrides\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358748 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-run-ovn-kubernetes\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358768 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29e75c57-582d-4f9b-a3b3-0044785e3595-ovn-node-metrics-cert\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358812 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-run-netns\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358919 4970 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-log-socket\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358935 4970 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-systemd-units\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358952 4970 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358964 4970 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-ovnkube-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.358979 4970 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.359012 4970 reconciler_common.go:293] "Volume detached for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-slash\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.359027 4970 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-openvswitch\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.359039 4970 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.359051 4970 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-kubelet\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.359062 4970 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-run-netns\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.359073 4970 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9687ea64-3693-468d-9fde-2059deb10338-env-overrides\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.359082 4970 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-ovn\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.359096 4970 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.359109 4970 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-node-log\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.359119 4970 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-cni-netd\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.359134 4970 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-host-cni-bin\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.359146 4970 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.362363 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9687ea64-3693-468d-9fde-2059deb10338-kube-api-access-2qpww" (OuterVolumeSpecName: "kube-api-access-2qpww") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "kube-api-access-2qpww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.362434 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9687ea64-3693-468d-9fde-2059deb10338-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.369830 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9687ea64-3693-468d-9fde-2059deb10338" (UID: "9687ea64-3693-468d-9fde-2059deb10338"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460032 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-var-lib-openvswitch\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460116 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/29e75c57-582d-4f9b-a3b3-0044785e3595-ovnkube-script-lib\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460165 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwxnf\" (UniqueName: \"kubernetes.io/projected/29e75c57-582d-4f9b-a3b3-0044785e3595-kube-api-access-wwxnf\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460176 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-var-lib-openvswitch\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460216 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-slash\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460251 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29e75c57-582d-4f9b-a3b3-0044785e3595-env-overrides\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460294 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-run-netns\") pod 
\"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460337 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-run-ovn-kubernetes\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460372 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29e75c57-582d-4f9b-a3b3-0044785e3595-ovn-node-metrics-cert\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460403 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-kubelet\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460435 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-node-log\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460463 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-slash\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460514 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460466 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460590 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-run-ovn-kubernetes\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460561 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-run-netns\") pod 
\"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460690 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-log-socket\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460738 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-systemd-units\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460762 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-kubelet\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460784 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-run-systemd\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460826 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-log-socket\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460874 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-node-log\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460891 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29e75c57-582d-4f9b-a3b3-0044785e3595-ovnkube-config\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460923 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-systemd-units\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.460938 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-run-openvswitch\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 
09:56:32.460980 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-run-systemd\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461040 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-cni-bin\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461131 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-run-ovn\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461210 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-cni-netd\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461243 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-etc-openvswitch\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461395 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/29e75c57-582d-4f9b-a3b3-0044785e3595-ovnkube-script-lib\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461369 4970 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9687ea64-3693-468d-9fde-2059deb10338-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461495 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-cni-bin\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461508 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qpww\" (UniqueName: \"kubernetes.io/projected/9687ea64-3693-468d-9fde-2059deb10338-kube-api-access-2qpww\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461500 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/29e75c57-582d-4f9b-a3b3-0044785e3595-env-overrides\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc 
kubenswrapper[4970]: I0930 09:56:32.461566 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-run-ovn\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461531 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-run-openvswitch\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461534 4970 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9687ea64-3693-468d-9fde-2059deb10338-run-systemd\") on node \"crc\" DevicePath \"\"" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461581 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-etc-openvswitch\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.461624 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29e75c57-582d-4f9b-a3b3-0044785e3595-host-cni-netd\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.462190 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/29e75c57-582d-4f9b-a3b3-0044785e3595-ovnkube-config\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.470556 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/29e75c57-582d-4f9b-a3b3-0044785e3595-ovn-node-metrics-cert\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.487144 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwxnf\" (UniqueName: \"kubernetes.io/projected/29e75c57-582d-4f9b-a3b3-0044785e3595-kube-api-access-wwxnf\") pod \"ovnkube-node-kzf6f\" (UID: \"29e75c57-582d-4f9b-a3b3-0044785e3595\") " pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f"
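
Worth noticing in the mount block that closes here: the host-path and configmap volumes all report MountVolume.SetUp success within a couple of milliseconds (.460-.462), while the secret ovn-node-metrics-cert (.470556) and the projected token kube-api-access-wwxnf (.487144) finish last. A plausible reading is that host-path setup is little more than a path check, whereas secret and projected payloads have to be materialized onto a tmpfs before the pod can start. A sketch of why the latencies would differ, with stand-in setup functions (hypothetical, not kubelet code):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // hostPathSetUp only has to verify the path exists; nothing is copied or
    // mounted, so it returns almost immediately.
    func hostPathSetUp(path string) error {
        _, err := os.Stat(path)
        return err
    }

    // secretSetUp stands in for fetching the payload and writing it out; the
    // fetch dominates, which would explain the later timestamps.
    func secretSetUp(fetch func() ([]byte, error), dir string) error {
        data, err := fetch()
        if err != nil {
            return err
        }
        return os.WriteFile(dir+"/tls.crt", data, 0o600)
    }

    func main() {
        start := time.Now()
        _ = hostPathSetUp("/var/lib/kubelet")
        fmt.Println("host-path SetUp took", time.Since(start))

        start = time.Now()
        _ = secretSetUp(func() ([]byte, error) {
            time.Sleep(10 * time.Millisecond) // stand-in for the data fetch
            return []byte("PEM DATA"), nil
        }, os.TempDir())
        fmt.Println("secret SetUp took", time.Since(start))
    }
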
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:32 crc kubenswrapper[4970]: W0930 09:56:32.565101 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29e75c57_582d_4f9b_a3b3_0044785e3595.slice/crio-acb3bedff7408861fae735a3e0ad311388bacc5d821b69f3b5e05af098bef8c5 WatchSource:0}: Error finding container acb3bedff7408861fae735a3e0ad311388bacc5d821b69f3b5e05af098bef8c5: Status 404 returned error can't find the container with id acb3bedff7408861fae735a3e0ad311388bacc5d821b69f3b5e05af098bef8c5 Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.684107 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wdlzl_adc4e528-ad76-4673-925a-f4f932e1ac51/kube-multus/2.log" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.685282 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wdlzl_adc4e528-ad76-4673-925a-f4f932e1ac51/kube-multus/1.log" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.685376 4970 generic.go:334] "Generic (PLEG): container finished" podID="adc4e528-ad76-4673-925a-f4f932e1ac51" containerID="f9a542523ea7a51e1787f3a5415c9f5e402b4853c40b253cc053f7454fa09492" exitCode=2 Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.685493 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wdlzl" event={"ID":"adc4e528-ad76-4673-925a-f4f932e1ac51","Type":"ContainerDied","Data":"f9a542523ea7a51e1787f3a5415c9f5e402b4853c40b253cc053f7454fa09492"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.685554 4970 scope.go:117] "RemoveContainer" containerID="74c99f0b3f29473aff8708291fee8c4ab11a03b1c852dbeb54ff7f2cceac3824" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.686423 4970 scope.go:117] "RemoveContainer" containerID="f9a542523ea7a51e1787f3a5415c9f5e402b4853c40b253cc053f7454fa09492" Sep 30 09:56:32 crc kubenswrapper[4970]: E0930 09:56:32.686745 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-wdlzl_openshift-multus(adc4e528-ad76-4673-925a-f4f932e1ac51)\"" pod="openshift-multus/multus-wdlzl" podUID="adc4e528-ad76-4673-925a-f4f932e1ac51" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.687689 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" event={"ID":"29e75c57-582d-4f9b-a3b3-0044785e3595","Type":"ContainerStarted","Data":"acb3bedff7408861fae735a3e0ad311388bacc5d821b69f3b5e05af098bef8c5"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.693521 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovnkube-controller/3.log" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.698359 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovn-acl-logging/0.log" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.698854 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-frblw_9687ea64-3693-468d-9fde-2059deb10338/ovn-controller/0.log" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699320 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" 
containerID="4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813" exitCode=0 Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699349 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" containerID="ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04" exitCode=0 Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699356 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" containerID="a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954" exitCode=0 Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699363 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" containerID="7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306" exitCode=0 Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699372 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" containerID="45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff" exitCode=0 Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699363 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699423 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699443 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699379 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" containerID="702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d" exitCode=0 Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699462 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699480 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" containerID="8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29" exitCode=143 Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699487 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699493 4970 generic.go:334] "Generic (PLEG): container finished" podID="9687ea64-3693-468d-9fde-2059deb10338" containerID="9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040" exitCode=143 Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699462 4970 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699507 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699606 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699624 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699633 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699640 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699646 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699653 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699660 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699667 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699673 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699680 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699689 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699700 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699709 4970 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699715 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699722 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699728 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699734 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699740 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699746 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699752 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699758 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699767 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699785 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699792 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699798 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699804 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699809 4970 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699815 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699822 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699827 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699833 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699839 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699847 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-frblw" event={"ID":"9687ea64-3693-468d-9fde-2059deb10338","Type":"ContainerDied","Data":"b0647c809e6f211d5390100dba68c022765b168080a9a385c63462c8d822693b"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699857 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699865 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699871 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699878 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699884 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699890 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699899 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699905 4970 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699912 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.699919 4970 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c"} Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.727016 4970 scope.go:117] "RemoveContainer" containerID="4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.794370 4970 scope.go:117] "RemoveContainer" containerID="595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.801416 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-frblw"] Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.806917 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-frblw"] Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.819641 4970 scope.go:117] "RemoveContainer" containerID="ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.839608 4970 scope.go:117] "RemoveContainer" containerID="a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.859578 4970 scope.go:117] "RemoveContainer" containerID="7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.876956 4970 scope.go:117] "RemoveContainer" containerID="45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.933386 4970 scope.go:117] "RemoveContainer" containerID="702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.964769 4970 scope.go:117] "RemoveContainer" containerID="8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29" Sep 30 09:56:32 crc kubenswrapper[4970]: I0930 09:56:32.981612 4970 scope.go:117] "RemoveContainer" containerID="9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.002039 4970 scope.go:117] "RemoveContainer" containerID="8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.022219 4970 scope.go:117] "RemoveContainer" containerID="4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813" Sep 30 09:56:33 crc kubenswrapper[4970]: E0930 09:56:33.022981 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813\": container with ID starting with 4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813 not found: ID does not exist" containerID="4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.023120 4970 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813"} err="failed to get container status \"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813\": rpc error: code = NotFound desc = could not find container \"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813\": container with ID starting with 4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.023198 4970 scope.go:117] "RemoveContainer" containerID="595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99" Sep 30 09:56:33 crc kubenswrapper[4970]: E0930 09:56:33.023849 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\": container with ID starting with 595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99 not found: ID does not exist" containerID="595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.024003 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"} err="failed to get container status \"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\": rpc error: code = NotFound desc = could not find container \"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\": container with ID starting with 595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.024116 4970 scope.go:117] "RemoveContainer" containerID="ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04" Sep 30 09:56:33 crc kubenswrapper[4970]: E0930 09:56:33.024610 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\": container with ID starting with ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04 not found: ID does not exist" containerID="ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.024654 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04"} err="failed to get container status \"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\": rpc error: code = NotFound desc = could not find container \"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\": container with ID starting with ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.024686 4970 scope.go:117] "RemoveContainer" containerID="a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954" Sep 30 09:56:33 crc kubenswrapper[4970]: E0930 09:56:33.025039 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\": container with ID starting with a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954 not found: ID does not exist" 
containerID="a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.025159 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954"} err="failed to get container status \"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\": rpc error: code = NotFound desc = could not find container \"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\": container with ID starting with a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.025304 4970 scope.go:117] "RemoveContainer" containerID="7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306" Sep 30 09:56:33 crc kubenswrapper[4970]: E0930 09:56:33.025697 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\": container with ID starting with 7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306 not found: ID does not exist" containerID="7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.025795 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306"} err="failed to get container status \"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\": rpc error: code = NotFound desc = could not find container \"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\": container with ID starting with 7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.025889 4970 scope.go:117] "RemoveContainer" containerID="45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff" Sep 30 09:56:33 crc kubenswrapper[4970]: E0930 09:56:33.026278 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\": container with ID starting with 45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff not found: ID does not exist" containerID="45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.026481 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff"} err="failed to get container status \"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\": rpc error: code = NotFound desc = could not find container \"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\": container with ID starting with 45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.026574 4970 scope.go:117] "RemoveContainer" containerID="702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d" Sep 30 09:56:33 crc kubenswrapper[4970]: E0930 09:56:33.026912 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\": container with ID starting with 702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d not found: ID does not exist" containerID="702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.026953 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d"} err="failed to get container status \"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\": rpc error: code = NotFound desc = could not find container \"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\": container with ID starting with 702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.026975 4970 scope.go:117] "RemoveContainer" containerID="8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29" Sep 30 09:56:33 crc kubenswrapper[4970]: E0930 09:56:33.027303 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\": container with ID starting with 8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29 not found: ID does not exist" containerID="8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.027357 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29"} err="failed to get container status \"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\": rpc error: code = NotFound desc = could not find container \"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\": container with ID starting with 8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.027390 4970 scope.go:117] "RemoveContainer" containerID="9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040" Sep 30 09:56:33 crc kubenswrapper[4970]: E0930 09:56:33.027967 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\": container with ID starting with 9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040 not found: ID does not exist" containerID="9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.028089 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040"} err="failed to get container status \"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\": rpc error: code = NotFound desc = could not find container \"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\": container with ID starting with 9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.028190 4970 scope.go:117] "RemoveContainer" containerID="8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c" Sep 30 09:56:33 crc 
kubenswrapper[4970]: E0930 09:56:33.028547 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\": container with ID starting with 8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c not found: ID does not exist" containerID="8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.028580 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c"} err="failed to get container status \"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\": rpc error: code = NotFound desc = could not find container \"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\": container with ID starting with 8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.028600 4970 scope.go:117] "RemoveContainer" containerID="4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.028904 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813"} err="failed to get container status \"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813\": rpc error: code = NotFound desc = could not find container \"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813\": container with ID starting with 4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.028926 4970 scope.go:117] "RemoveContainer" containerID="595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.029268 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"} err="failed to get container status \"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\": rpc error: code = NotFound desc = could not find container \"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\": container with ID starting with 595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.029326 4970 scope.go:117] "RemoveContainer" containerID="ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.029671 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04"} err="failed to get container status \"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\": rpc error: code = NotFound desc = could not find container \"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\": container with ID starting with ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.029794 4970 scope.go:117] "RemoveContainer" containerID="a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954" Sep 30 09:56:33 crc 
kubenswrapper[4970]: I0930 09:56:33.030141 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954"} err="failed to get container status \"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\": rpc error: code = NotFound desc = could not find container \"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\": container with ID starting with a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.030172 4970 scope.go:117] "RemoveContainer" containerID="7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.030617 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306"} err="failed to get container status \"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\": rpc error: code = NotFound desc = could not find container \"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\": container with ID starting with 7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.030645 4970 scope.go:117] "RemoveContainer" containerID="45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.030902 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff"} err="failed to get container status \"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\": rpc error: code = NotFound desc = could not find container \"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\": container with ID starting with 45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.031023 4970 scope.go:117] "RemoveContainer" containerID="702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.031354 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d"} err="failed to get container status \"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\": rpc error: code = NotFound desc = could not find container \"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\": container with ID starting with 702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.031383 4970 scope.go:117] "RemoveContainer" containerID="8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.031854 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29"} err="failed to get container status \"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\": rpc error: code = NotFound desc = could not find container \"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\": container with ID 
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.031899 4970 scope.go:117] "RemoveContainer" containerID="9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.032370 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040"} err="failed to get container status \"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\": rpc error: code = NotFound desc = could not find container \"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\": container with ID starting with 9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.032413 4970 scope.go:117] "RemoveContainer" containerID="8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.032725 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c"} err="failed to get container status \"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\": rpc error: code = NotFound desc = could not find container \"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\": container with ID starting with 8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.032828 4970 scope.go:117] "RemoveContainer" containerID="4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.033432 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813"} err="failed to get container status \"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813\": rpc error: code = NotFound desc = could not find container \"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813\": container with ID starting with 4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.033490 4970 scope.go:117] "RemoveContainer" containerID="595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.034100 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"} err="failed to get container status \"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\": rpc error: code = NotFound desc = could not find container \"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\": container with ID starting with 595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.034157 4970 scope.go:117] "RemoveContainer" containerID="ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.034694 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04"} err="failed to get container status \"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\": rpc error: code = NotFound desc = could not find container \"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\": container with ID starting with ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.034739 4970 scope.go:117] "RemoveContainer" containerID="a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.035152 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954"} err="failed to get container status \"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\": rpc error: code = NotFound desc = could not find container \"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\": container with ID starting with a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.035189 4970 scope.go:117] "RemoveContainer" containerID="7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.035695 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306"} err="failed to get container status \"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\": rpc error: code = NotFound desc = could not find container \"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\": container with ID starting with 7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.035731 4970 scope.go:117] "RemoveContainer" containerID="45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.036152 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff"} err="failed to get container status \"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\": rpc error: code = NotFound desc = could not find container \"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\": container with ID starting with 45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.036187 4970 scope.go:117] "RemoveContainer" containerID="702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.036488 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d"} err="failed to get container status \"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\": rpc error: code = NotFound desc = could not find container \"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\": container with ID starting with 702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.036532 4970 scope.go:117] "RemoveContainer" containerID="8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.036887 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29"} err="failed to get container status \"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\": rpc error: code = NotFound desc = could not find container \"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\": container with ID starting with 8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.036939 4970 scope.go:117] "RemoveContainer" containerID="9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.037452 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040"} err="failed to get container status \"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\": rpc error: code = NotFound desc = could not find container \"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\": container with ID starting with 9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.037484 4970 scope.go:117] "RemoveContainer" containerID="8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.037751 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c"} err="failed to get container status \"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\": rpc error: code = NotFound desc = could not find container \"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\": container with ID starting with 8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.037795 4970 scope.go:117] "RemoveContainer" containerID="4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.038239 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813"} err="failed to get container status \"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813\": rpc error: code = NotFound desc = could not find container \"4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813\": container with ID starting with 4ad238c4b86b7df52037107669b97409b974fe2d78bf8ad47e032b32f18af813 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.038291 4970 scope.go:117] "RemoveContainer" containerID="595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.038674 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99"} err="failed to get container status \"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\": rpc error: code = NotFound desc = could not find container \"595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99\": container with ID starting with 595c97589f57da539b87cd4fae0f97dfb909f356fa3baed4ab7c821276524b99 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.038930 4970 scope.go:117] "RemoveContainer" containerID="ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.039553 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04"} err="failed to get container status \"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\": rpc error: code = NotFound desc = could not find container \"ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04\": container with ID starting with ef91efe2a2bb213e859cb95b9217c5ba74e908e1b4750ee60cd63ed6cd1c3f04 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.039583 4970 scope.go:117] "RemoveContainer" containerID="a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.039851 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954"} err="failed to get container status \"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\": rpc error: code = NotFound desc = could not find container \"a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954\": container with ID starting with a99d2be17975c4e57fc98a60a0c8c7331269f16e722404923c72e383cbf07954 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.039893 4970 scope.go:117] "RemoveContainer" containerID="7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.040179 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306"} err="failed to get container status \"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\": rpc error: code = NotFound desc = could not find container \"7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306\": container with ID starting with 7a5f0e9e13c865a2dc39e90bfc6e98d4aa5072e61cb6d735c470f407fbfa3306 not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.040211 4970 scope.go:117] "RemoveContainer" containerID="45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.040412 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff"} err="failed to get container status \"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\": rpc error: code = NotFound desc = could not find container \"45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff\": container with ID starting with 45f067f738c5585d17a75c2849be9e4d414fda2c5654f8f030b1007523900eff not found: ID does not exist"
Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.040436 4970 scope.go:117] "RemoveContainer" containerID="702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d"
containerID="702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.040830 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d"} err="failed to get container status \"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\": rpc error: code = NotFound desc = could not find container \"702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d\": container with ID starting with 702ed80c0aba2fafbfa13d7c2c0b9d3090a189e06e3d3aebbab917435a92c91d not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.040873 4970 scope.go:117] "RemoveContainer" containerID="8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.041187 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29"} err="failed to get container status \"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\": rpc error: code = NotFound desc = could not find container \"8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29\": container with ID starting with 8dc5523533299878135c16f41cb9f224627d6d9a7495d4a8b50ce145eae0de29 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.041220 4970 scope.go:117] "RemoveContainer" containerID="9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.041476 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040"} err="failed to get container status \"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\": rpc error: code = NotFound desc = could not find container \"9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040\": container with ID starting with 9989c5fe295e2ff27d5694c12d47c1482a67683ca3896bb186e57fbe1ccf0040 not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.041516 4970 scope.go:117] "RemoveContainer" containerID="8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.041770 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c"} err="failed to get container status \"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\": rpc error: code = NotFound desc = could not find container \"8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c\": container with ID starting with 8267c5b3be39221392036d7aaa11815fa1308a86000071881e7f7cb15c0b113c not found: ID does not exist" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.680107 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9687ea64-3693-468d-9fde-2059deb10338" path="/var/lib/kubelet/pods/9687ea64-3693-468d-9fde-2059deb10338/volumes" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.709734 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wdlzl_adc4e528-ad76-4673-925a-f4f932e1ac51/kube-multus/2.log" Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.711885 4970 generic.go:334] "Generic (PLEG): 
container finished" podID="29e75c57-582d-4f9b-a3b3-0044785e3595" containerID="8e8b5ebda9a284c0c3e028e1d2d040e20e8ee5dfc754d33d6e78ba188bc9e2b6" exitCode=0 Sep 30 09:56:33 crc kubenswrapper[4970]: I0930 09:56:33.711934 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" event={"ID":"29e75c57-582d-4f9b-a3b3-0044785e3595","Type":"ContainerDied","Data":"8e8b5ebda9a284c0c3e028e1d2d040e20e8ee5dfc754d33d6e78ba188bc9e2b6"} Sep 30 09:56:34 crc kubenswrapper[4970]: I0930 09:56:34.725230 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" event={"ID":"29e75c57-582d-4f9b-a3b3-0044785e3595","Type":"ContainerStarted","Data":"e08a38bccd17c6acf1038bef961cf07025d5043a003cf04c3878d5bad7b785df"} Sep 30 09:56:34 crc kubenswrapper[4970]: I0930 09:56:34.727549 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" event={"ID":"29e75c57-582d-4f9b-a3b3-0044785e3595","Type":"ContainerStarted","Data":"5c23593e53f62edd22e1682a1d51170d9475c6c6075eee70eae2ad4ad33de881"} Sep 30 09:56:34 crc kubenswrapper[4970]: I0930 09:56:34.727628 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" event={"ID":"29e75c57-582d-4f9b-a3b3-0044785e3595","Type":"ContainerStarted","Data":"b408b3a72f794618aebf81f5508ef8317d50376c823328223157f9ad77fa79c3"} Sep 30 09:56:34 crc kubenswrapper[4970]: I0930 09:56:34.727687 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" event={"ID":"29e75c57-582d-4f9b-a3b3-0044785e3595","Type":"ContainerStarted","Data":"30f55625387dd2e61a817a453e4f139574f1992d236cf2cff5b808dbeaeb8d60"} Sep 30 09:56:34 crc kubenswrapper[4970]: I0930 09:56:34.727746 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" event={"ID":"29e75c57-582d-4f9b-a3b3-0044785e3595","Type":"ContainerStarted","Data":"7ce3f38fd31a25232794fd8c23d176f0e42dc216c10b55ad18776d35c19ffded"} Sep 30 09:56:34 crc kubenswrapper[4970]: I0930 09:56:34.727800 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" event={"ID":"29e75c57-582d-4f9b-a3b3-0044785e3595","Type":"ContainerStarted","Data":"6cfbc201106410d2bd4c7989962b89e108f431bf51b006d3fbdd0af992c25d82"} Sep 30 09:56:34 crc kubenswrapper[4970]: I0930 09:56:34.821368 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 09:56:34 crc kubenswrapper[4970]: I0930 09:56:34.821457 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 09:56:37 crc kubenswrapper[4970]: I0930 09:56:37.757937 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" event={"ID":"29e75c57-582d-4f9b-a3b3-0044785e3595","Type":"ContainerStarted","Data":"f70070ab1919c83f6dcee354b3ad3770043048cb7f34db54e7e26e9ba815285e"} Sep 30 09:56:39 crc kubenswrapper[4970]: I0930 09:56:39.779234 4970 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" event={"ID":"29e75c57-582d-4f9b-a3b3-0044785e3595","Type":"ContainerStarted","Data":"6eab6aff39c7c7e6a353ab9da4e7dc9b8c92f3624936bb33eee9dbc1f485b154"} Sep 30 09:56:39 crc kubenswrapper[4970]: I0930 09:56:39.780265 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:39 crc kubenswrapper[4970]: I0930 09:56:39.816675 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:39 crc kubenswrapper[4970]: I0930 09:56:39.822231 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" podStartSLOduration=7.822215713 podStartE2EDuration="7.822215713s" podCreationTimestamp="2025-09-30 09:56:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:56:39.816228117 +0000 UTC m=+612.888079061" watchObservedRunningTime="2025-09-30 09:56:39.822215713 +0000 UTC m=+612.894066657" Sep 30 09:56:40 crc kubenswrapper[4970]: I0930 09:56:40.789330 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:40 crc kubenswrapper[4970]: I0930 09:56:40.789438 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:40 crc kubenswrapper[4970]: I0930 09:56:40.874161 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:56:47 crc kubenswrapper[4970]: I0930 09:56:47.671885 4970 scope.go:117] "RemoveContainer" containerID="f9a542523ea7a51e1787f3a5415c9f5e402b4853c40b253cc053f7454fa09492" Sep 30 09:56:47 crc kubenswrapper[4970]: E0930 09:56:47.672910 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-wdlzl_openshift-multus(adc4e528-ad76-4673-925a-f4f932e1ac51)\"" pod="openshift-multus/multus-wdlzl" podUID="adc4e528-ad76-4673-925a-f4f932e1ac51" Sep 30 09:57:01 crc kubenswrapper[4970]: I0930 09:57:01.668813 4970 scope.go:117] "RemoveContainer" containerID="f9a542523ea7a51e1787f3a5415c9f5e402b4853c40b253cc053f7454fa09492" Sep 30 09:57:01 crc kubenswrapper[4970]: I0930 09:57:01.941399 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wdlzl_adc4e528-ad76-4673-925a-f4f932e1ac51/kube-multus/2.log" Sep 30 09:57:01 crc kubenswrapper[4970]: I0930 09:57:01.941470 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wdlzl" event={"ID":"adc4e528-ad76-4673-925a-f4f932e1ac51","Type":"ContainerStarted","Data":"491aad15da833e299a05206e7b90eb5e77f7b951d447ae5c752770335495b26b"} Sep 30 09:57:02 crc kubenswrapper[4970]: I0930 09:57:02.568855 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-kzf6f" Sep 30 09:57:04 crc kubenswrapper[4970]: I0930 09:57:04.821695 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 
09:57:04 crc kubenswrapper[4970]: I0930 09:57:04.822103 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 09:57:04 crc kubenswrapper[4970]: I0930 09:57:04.822184 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 09:57:04 crc kubenswrapper[4970]: I0930 09:57:04.823234 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc4842ae8bfd4b8ab89c01d01940a6e7946834cd5975d69ed84b67c89bbc8813"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 09:57:04 crc kubenswrapper[4970]: I0930 09:57:04.823334 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://dc4842ae8bfd4b8ab89c01d01940a6e7946834cd5975d69ed84b67c89bbc8813" gracePeriod=600 Sep 30 09:57:04 crc kubenswrapper[4970]: I0930 09:57:04.968921 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="dc4842ae8bfd4b8ab89c01d01940a6e7946834cd5975d69ed84b67c89bbc8813" exitCode=0 Sep 30 09:57:04 crc kubenswrapper[4970]: I0930 09:57:04.968981 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"dc4842ae8bfd4b8ab89c01d01940a6e7946834cd5975d69ed84b67c89bbc8813"} Sep 30 09:57:04 crc kubenswrapper[4970]: I0930 09:57:04.969052 4970 scope.go:117] "RemoveContainer" containerID="3256d43a82b3895388a142b25e718f337799e9f30680508a3d1264e9ba385ed1" Sep 30 09:57:05 crc kubenswrapper[4970]: I0930 09:57:05.978348 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"00bf1152552c1c478a9068852104f804953a83e3c45cb022e488704fa11cacf9"} Sep 30 09:57:10 crc kubenswrapper[4970]: I0930 09:57:10.736879 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd"] Sep 30 09:57:10 crc kubenswrapper[4970]: I0930 09:57:10.738614 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:10 crc kubenswrapper[4970]: W0930 09:57:10.740716 4970 reflector.go:561] object-"openshift-marketplace"/"default-dockercfg-vmwhc": failed to list *v1.Secret: secrets "default-dockercfg-vmwhc" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-marketplace": no relationship found between node 'crc' and this object Sep 30 09:57:10 crc kubenswrapper[4970]: E0930 09:57:10.740779 4970 reflector.go:158] "Unhandled Error" err="object-\"openshift-marketplace\"/\"default-dockercfg-vmwhc\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-vmwhc\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-marketplace\": no relationship found between node 'crc' and this object" logger="UnhandledError" Sep 30 09:57:10 crc kubenswrapper[4970]: I0930 09:57:10.751568 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd"] Sep 30 09:57:10 crc kubenswrapper[4970]: I0930 09:57:10.863638 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/389409c8-24ce-486a-b03a-8b8770ddedfb-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd\" (UID: \"389409c8-24ce-486a-b03a-8b8770ddedfb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:10 crc kubenswrapper[4970]: I0930 09:57:10.863707 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngw97\" (UniqueName: \"kubernetes.io/projected/389409c8-24ce-486a-b03a-8b8770ddedfb-kube-api-access-ngw97\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd\" (UID: \"389409c8-24ce-486a-b03a-8b8770ddedfb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:10 crc kubenswrapper[4970]: I0930 09:57:10.863986 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/389409c8-24ce-486a-b03a-8b8770ddedfb-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd\" (UID: \"389409c8-24ce-486a-b03a-8b8770ddedfb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:10 crc kubenswrapper[4970]: I0930 09:57:10.966139 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/389409c8-24ce-486a-b03a-8b8770ddedfb-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd\" (UID: \"389409c8-24ce-486a-b03a-8b8770ddedfb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:10 crc kubenswrapper[4970]: I0930 09:57:10.966186 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngw97\" (UniqueName: \"kubernetes.io/projected/389409c8-24ce-486a-b03a-8b8770ddedfb-kube-api-access-ngw97\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd\" (UID: \"389409c8-24ce-486a-b03a-8b8770ddedfb\") " 
pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:10 crc kubenswrapper[4970]: I0930 09:57:10.966220 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/389409c8-24ce-486a-b03a-8b8770ddedfb-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd\" (UID: \"389409c8-24ce-486a-b03a-8b8770ddedfb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:10 crc kubenswrapper[4970]: I0930 09:57:10.966662 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/389409c8-24ce-486a-b03a-8b8770ddedfb-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd\" (UID: \"389409c8-24ce-486a-b03a-8b8770ddedfb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:10 crc kubenswrapper[4970]: I0930 09:57:10.966688 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/389409c8-24ce-486a-b03a-8b8770ddedfb-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd\" (UID: \"389409c8-24ce-486a-b03a-8b8770ddedfb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:10 crc kubenswrapper[4970]: I0930 09:57:10.989827 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngw97\" (UniqueName: \"kubernetes.io/projected/389409c8-24ce-486a-b03a-8b8770ddedfb-kube-api-access-ngw97\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd\" (UID: \"389409c8-24ce-486a-b03a-8b8770ddedfb\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:11 crc kubenswrapper[4970]: I0930 09:57:11.941706 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Sep 30 09:57:11 crc kubenswrapper[4970]: I0930 09:57:11.948953 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:12 crc kubenswrapper[4970]: I0930 09:57:12.190569 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd"] Sep 30 09:57:12 crc kubenswrapper[4970]: W0930 09:57:12.198784 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod389409c8_24ce_486a_b03a_8b8770ddedfb.slice/crio-c7a34b684e3a40b86884cd466abefd251e2ef9f4f1a7a9137f31176e32953bcb WatchSource:0}: Error finding container c7a34b684e3a40b86884cd466abefd251e2ef9f4f1a7a9137f31176e32953bcb: Status 404 returned error can't find the container with id c7a34b684e3a40b86884cd466abefd251e2ef9f4f1a7a9137f31176e32953bcb Sep 30 09:57:13 crc kubenswrapper[4970]: I0930 09:57:13.024778 4970 generic.go:334] "Generic (PLEG): container finished" podID="389409c8-24ce-486a-b03a-8b8770ddedfb" containerID="fd9506f8af469721e1cdbe7f8677115d694bad4c293288ae21d7110d287cd9a1" exitCode=0 Sep 30 09:57:13 crc kubenswrapper[4970]: I0930 09:57:13.024922 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" event={"ID":"389409c8-24ce-486a-b03a-8b8770ddedfb","Type":"ContainerDied","Data":"fd9506f8af469721e1cdbe7f8677115d694bad4c293288ae21d7110d287cd9a1"} Sep 30 09:57:13 crc kubenswrapper[4970]: I0930 09:57:13.025312 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" event={"ID":"389409c8-24ce-486a-b03a-8b8770ddedfb","Type":"ContainerStarted","Data":"c7a34b684e3a40b86884cd466abefd251e2ef9f4f1a7a9137f31176e32953bcb"} Sep 30 09:57:15 crc kubenswrapper[4970]: I0930 09:57:15.041522 4970 generic.go:334] "Generic (PLEG): container finished" podID="389409c8-24ce-486a-b03a-8b8770ddedfb" containerID="3762b4cbd41af3ae33e504d85b643c8933c5c60669c053e7069a47c00b21c0fd" exitCode=0 Sep 30 09:57:15 crc kubenswrapper[4970]: I0930 09:57:15.041668 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" event={"ID":"389409c8-24ce-486a-b03a-8b8770ddedfb","Type":"ContainerDied","Data":"3762b4cbd41af3ae33e504d85b643c8933c5c60669c053e7069a47c00b21c0fd"} Sep 30 09:57:16 crc kubenswrapper[4970]: I0930 09:57:16.051124 4970 generic.go:334] "Generic (PLEG): container finished" podID="389409c8-24ce-486a-b03a-8b8770ddedfb" containerID="fee27c17ffe7cd204d61d508f32e04f6c9639b04f4d026f8adb656db40ffd0b1" exitCode=0 Sep 30 09:57:16 crc kubenswrapper[4970]: I0930 09:57:16.051206 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" event={"ID":"389409c8-24ce-486a-b03a-8b8770ddedfb","Type":"ContainerDied","Data":"fee27c17ffe7cd204d61d508f32e04f6c9639b04f4d026f8adb656db40ffd0b1"} Sep 30 09:57:17 crc kubenswrapper[4970]: I0930 09:57:17.367062 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:17 crc kubenswrapper[4970]: I0930 09:57:17.465431 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngw97\" (UniqueName: \"kubernetes.io/projected/389409c8-24ce-486a-b03a-8b8770ddedfb-kube-api-access-ngw97\") pod \"389409c8-24ce-486a-b03a-8b8770ddedfb\" (UID: \"389409c8-24ce-486a-b03a-8b8770ddedfb\") " Sep 30 09:57:17 crc kubenswrapper[4970]: I0930 09:57:17.476360 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/389409c8-24ce-486a-b03a-8b8770ddedfb-kube-api-access-ngw97" (OuterVolumeSpecName: "kube-api-access-ngw97") pod "389409c8-24ce-486a-b03a-8b8770ddedfb" (UID: "389409c8-24ce-486a-b03a-8b8770ddedfb"). InnerVolumeSpecName "kube-api-access-ngw97". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:57:17 crc kubenswrapper[4970]: I0930 09:57:17.566773 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/389409c8-24ce-486a-b03a-8b8770ddedfb-bundle\") pod \"389409c8-24ce-486a-b03a-8b8770ddedfb\" (UID: \"389409c8-24ce-486a-b03a-8b8770ddedfb\") " Sep 30 09:57:17 crc kubenswrapper[4970]: I0930 09:57:17.566838 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/389409c8-24ce-486a-b03a-8b8770ddedfb-util\") pod \"389409c8-24ce-486a-b03a-8b8770ddedfb\" (UID: \"389409c8-24ce-486a-b03a-8b8770ddedfb\") " Sep 30 09:57:17 crc kubenswrapper[4970]: I0930 09:57:17.567313 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngw97\" (UniqueName: \"kubernetes.io/projected/389409c8-24ce-486a-b03a-8b8770ddedfb-kube-api-access-ngw97\") on node \"crc\" DevicePath \"\"" Sep 30 09:57:17 crc kubenswrapper[4970]: I0930 09:57:17.568592 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/389409c8-24ce-486a-b03a-8b8770ddedfb-bundle" (OuterVolumeSpecName: "bundle") pod "389409c8-24ce-486a-b03a-8b8770ddedfb" (UID: "389409c8-24ce-486a-b03a-8b8770ddedfb"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:57:17 crc kubenswrapper[4970]: I0930 09:57:17.669449 4970 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/389409c8-24ce-486a-b03a-8b8770ddedfb-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:57:17 crc kubenswrapper[4970]: I0930 09:57:17.897071 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/389409c8-24ce-486a-b03a-8b8770ddedfb-util" (OuterVolumeSpecName: "util") pod "389409c8-24ce-486a-b03a-8b8770ddedfb" (UID: "389409c8-24ce-486a-b03a-8b8770ddedfb"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:57:17 crc kubenswrapper[4970]: I0930 09:57:17.975066 4970 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/389409c8-24ce-486a-b03a-8b8770ddedfb-util\") on node \"crc\" DevicePath \"\"" Sep 30 09:57:18 crc kubenswrapper[4970]: I0930 09:57:18.083846 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" event={"ID":"389409c8-24ce-486a-b03a-8b8770ddedfb","Type":"ContainerDied","Data":"c7a34b684e3a40b86884cd466abefd251e2ef9f4f1a7a9137f31176e32953bcb"} Sep 30 09:57:18 crc kubenswrapper[4970]: I0930 09:57:18.083909 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7a34b684e3a40b86884cd466abefd251e2ef9f4f1a7a9137f31176e32953bcb" Sep 30 09:57:18 crc kubenswrapper[4970]: I0930 09:57:18.084036 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.289112 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-lfwqt"] Sep 30 09:57:22 crc kubenswrapper[4970]: E0930 09:57:22.289665 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="389409c8-24ce-486a-b03a-8b8770ddedfb" containerName="extract" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.289679 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="389409c8-24ce-486a-b03a-8b8770ddedfb" containerName="extract" Sep 30 09:57:22 crc kubenswrapper[4970]: E0930 09:57:22.289700 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="389409c8-24ce-486a-b03a-8b8770ddedfb" containerName="util" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.289706 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="389409c8-24ce-486a-b03a-8b8770ddedfb" containerName="util" Sep 30 09:57:22 crc kubenswrapper[4970]: E0930 09:57:22.289720 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="389409c8-24ce-486a-b03a-8b8770ddedfb" containerName="pull" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.289727 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="389409c8-24ce-486a-b03a-8b8770ddedfb" containerName="pull" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.289839 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="389409c8-24ce-486a-b03a-8b8770ddedfb" containerName="extract" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.290307 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-lfwqt" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.292911 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.293059 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.298251 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-n2k89" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.302057 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-lfwqt"] Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.343817 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8kjn\" (UniqueName: \"kubernetes.io/projected/f65ea665-ce1c-4197-ae02-5810c62f1355-kube-api-access-j8kjn\") pod \"nmstate-operator-5d6f6cfd66-lfwqt\" (UID: \"f65ea665-ce1c-4197-ae02-5810c62f1355\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-lfwqt" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.445783 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8kjn\" (UniqueName: \"kubernetes.io/projected/f65ea665-ce1c-4197-ae02-5810c62f1355-kube-api-access-j8kjn\") pod \"nmstate-operator-5d6f6cfd66-lfwqt\" (UID: \"f65ea665-ce1c-4197-ae02-5810c62f1355\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-lfwqt" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.466935 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8kjn\" (UniqueName: \"kubernetes.io/projected/f65ea665-ce1c-4197-ae02-5810c62f1355-kube-api-access-j8kjn\") pod \"nmstate-operator-5d6f6cfd66-lfwqt\" (UID: \"f65ea665-ce1c-4197-ae02-5810c62f1355\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-lfwqt" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.606681 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-lfwqt" Sep 30 09:57:22 crc kubenswrapper[4970]: I0930 09:57:22.879836 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-lfwqt"] Sep 30 09:57:23 crc kubenswrapper[4970]: I0930 09:57:23.115573 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-lfwqt" event={"ID":"f65ea665-ce1c-4197-ae02-5810c62f1355","Type":"ContainerStarted","Data":"03ec6260737ba5df31685860f8f7e6e2c7e3b714baba010457a646107ef43a69"} Sep 30 09:57:26 crc kubenswrapper[4970]: I0930 09:57:26.136133 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-lfwqt" event={"ID":"f65ea665-ce1c-4197-ae02-5810c62f1355","Type":"ContainerStarted","Data":"06ebc12562f46000db509216a37f5fd535b7b29ffc40edfa42d55460ea91d58a"} Sep 30 09:57:26 crc kubenswrapper[4970]: I0930 09:57:26.156107 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-lfwqt" podStartSLOduration=1.474935008 podStartE2EDuration="4.156080503s" podCreationTimestamp="2025-09-30 09:57:22 +0000 UTC" firstStartedPulling="2025-09-30 09:57:22.891754584 +0000 UTC m=+655.963605528" lastFinishedPulling="2025-09-30 09:57:25.572900069 +0000 UTC m=+658.644751023" observedRunningTime="2025-09-30 09:57:26.153524782 +0000 UTC m=+659.225375726" watchObservedRunningTime="2025-09-30 09:57:26.156080503 +0000 UTC m=+659.227931447" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.184528 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-xtzzj"] Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.187322 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58fcddf996-xtzzj" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.189946 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw"] Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.190792 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.192738 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.193779 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hhc2v" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.203826 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-xtzzj"] Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.209536 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw"] Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.230716 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-92md2"] Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.232133 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.305585 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlbpr\" (UniqueName: \"kubernetes.io/projected/15cbe10c-fb64-4630-bd5b-fd50c2c07d64-kube-api-access-jlbpr\") pod \"nmstate-webhook-6d689559c5-ztvnw\" (UID: \"15cbe10c-fb64-4630-bd5b-fd50c2c07d64\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.305900 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k69sj\" (UniqueName: \"kubernetes.io/projected/42b7f1da-5493-4471-980a-a87efdd8eda2-kube-api-access-k69sj\") pod \"nmstate-metrics-58fcddf996-xtzzj\" (UID: \"42b7f1da-5493-4471-980a-a87efdd8eda2\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-xtzzj" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.306090 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/15cbe10c-fb64-4630-bd5b-fd50c2c07d64-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-ztvnw\" (UID: \"15cbe10c-fb64-4630-bd5b-fd50c2c07d64\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.335399 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b"] Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.336628 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.338953 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.339257 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.339398 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-8pvfx" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.351008 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b"] Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.407260 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k69sj\" (UniqueName: \"kubernetes.io/projected/42b7f1da-5493-4471-980a-a87efdd8eda2-kube-api-access-k69sj\") pod \"nmstate-metrics-58fcddf996-xtzzj\" (UID: \"42b7f1da-5493-4471-980a-a87efdd8eda2\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-xtzzj" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.407318 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cf79d047-21bc-461c-a5c7-7c12104fbf35-ovs-socket\") pod \"nmstate-handler-92md2\" (UID: \"cf79d047-21bc-461c-a5c7-7c12104fbf35\") " pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.407350 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cf79d047-21bc-461c-a5c7-7c12104fbf35-nmstate-lock\") pod 
\"nmstate-handler-92md2\" (UID: \"cf79d047-21bc-461c-a5c7-7c12104fbf35\") " pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.407766 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/15cbe10c-fb64-4630-bd5b-fd50c2c07d64-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-ztvnw\" (UID: \"15cbe10c-fb64-4630-bd5b-fd50c2c07d64\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" Sep 30 09:57:31 crc kubenswrapper[4970]: E0930 09:57:31.407925 4970 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.407939 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cf79d047-21bc-461c-a5c7-7c12104fbf35-dbus-socket\") pod \"nmstate-handler-92md2\" (UID: \"cf79d047-21bc-461c-a5c7-7c12104fbf35\") " pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: E0930 09:57:31.408046 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15cbe10c-fb64-4630-bd5b-fd50c2c07d64-tls-key-pair podName:15cbe10c-fb64-4630-bd5b-fd50c2c07d64 nodeName:}" failed. No retries permitted until 2025-09-30 09:57:31.907971139 +0000 UTC m=+664.979822073 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/15cbe10c-fb64-4630-bd5b-fd50c2c07d64-tls-key-pair") pod "nmstate-webhook-6d689559c5-ztvnw" (UID: "15cbe10c-fb64-4630-bd5b-fd50c2c07d64") : secret "openshift-nmstate-webhook" not found Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.408320 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz8hx\" (UniqueName: \"kubernetes.io/projected/cf79d047-21bc-461c-a5c7-7c12104fbf35-kube-api-access-hz8hx\") pod \"nmstate-handler-92md2\" (UID: \"cf79d047-21bc-461c-a5c7-7c12104fbf35\") " pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.408512 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlbpr\" (UniqueName: \"kubernetes.io/projected/15cbe10c-fb64-4630-bd5b-fd50c2c07d64-kube-api-access-jlbpr\") pod \"nmstate-webhook-6d689559c5-ztvnw\" (UID: \"15cbe10c-fb64-4630-bd5b-fd50c2c07d64\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.426389 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k69sj\" (UniqueName: \"kubernetes.io/projected/42b7f1da-5493-4471-980a-a87efdd8eda2-kube-api-access-k69sj\") pod \"nmstate-metrics-58fcddf996-xtzzj\" (UID: \"42b7f1da-5493-4471-980a-a87efdd8eda2\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-xtzzj" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.432669 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlbpr\" (UniqueName: \"kubernetes.io/projected/15cbe10c-fb64-4630-bd5b-fd50c2c07d64-kube-api-access-jlbpr\") pod \"nmstate-webhook-6d689559c5-ztvnw\" (UID: \"15cbe10c-fb64-4630-bd5b-fd50c2c07d64\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.508372 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58fcddf996-xtzzj" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.509868 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr5kb\" (UniqueName: \"kubernetes.io/projected/1b05f65e-1145-40c4-a5cb-e07766072045-kube-api-access-qr5kb\") pod \"nmstate-console-plugin-864bb6dfb5-z745b\" (UID: \"1b05f65e-1145-40c4-a5cb-e07766072045\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.509947 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cf79d047-21bc-461c-a5c7-7c12104fbf35-ovs-socket\") pod \"nmstate-handler-92md2\" (UID: \"cf79d047-21bc-461c-a5c7-7c12104fbf35\") " pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.510010 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cf79d047-21bc-461c-a5c7-7c12104fbf35-nmstate-lock\") pod \"nmstate-handler-92md2\" (UID: \"cf79d047-21bc-461c-a5c7-7c12104fbf35\") " pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.510071 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/cf79d047-21bc-461c-a5c7-7c12104fbf35-nmstate-lock\") pod \"nmstate-handler-92md2\" (UID: \"cf79d047-21bc-461c-a5c7-7c12104fbf35\") " pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.510120 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/cf79d047-21bc-461c-a5c7-7c12104fbf35-ovs-socket\") pod \"nmstate-handler-92md2\" (UID: \"cf79d047-21bc-461c-a5c7-7c12104fbf35\") " pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.510156 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cf79d047-21bc-461c-a5c7-7c12104fbf35-dbus-socket\") pod \"nmstate-handler-92md2\" (UID: \"cf79d047-21bc-461c-a5c7-7c12104fbf35\") " pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.510266 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b05f65e-1145-40c4-a5cb-e07766072045-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-z745b\" (UID: \"1b05f65e-1145-40c4-a5cb-e07766072045\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.510330 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1b05f65e-1145-40c4-a5cb-e07766072045-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-z745b\" (UID: \"1b05f65e-1145-40c4-a5cb-e07766072045\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.510384 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hz8hx\" (UniqueName: \"kubernetes.io/projected/cf79d047-21bc-461c-a5c7-7c12104fbf35-kube-api-access-hz8hx\") pod 
\"nmstate-handler-92md2\" (UID: \"cf79d047-21bc-461c-a5c7-7c12104fbf35\") " pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.510421 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/cf79d047-21bc-461c-a5c7-7c12104fbf35-dbus-socket\") pod \"nmstate-handler-92md2\" (UID: \"cf79d047-21bc-461c-a5c7-7c12104fbf35\") " pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.538876 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hz8hx\" (UniqueName: \"kubernetes.io/projected/cf79d047-21bc-461c-a5c7-7c12104fbf35-kube-api-access-hz8hx\") pod \"nmstate-handler-92md2\" (UID: \"cf79d047-21bc-461c-a5c7-7c12104fbf35\") " pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.547100 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-558dcf77f6-vlqc5"] Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.548238 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.559355 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.580408 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-558dcf77f6-vlqc5"] Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.612689 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b05f65e-1145-40c4-a5cb-e07766072045-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-z745b\" (UID: \"1b05f65e-1145-40c4-a5cb-e07766072045\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.612743 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1b05f65e-1145-40c4-a5cb-e07766072045-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-z745b\" (UID: \"1b05f65e-1145-40c4-a5cb-e07766072045\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.612782 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qr5kb\" (UniqueName: \"kubernetes.io/projected/1b05f65e-1145-40c4-a5cb-e07766072045-kube-api-access-qr5kb\") pod \"nmstate-console-plugin-864bb6dfb5-z745b\" (UID: \"1b05f65e-1145-40c4-a5cb-e07766072045\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" Sep 30 09:57:31 crc kubenswrapper[4970]: E0930 09:57:31.613083 4970 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Sep 30 09:57:31 crc kubenswrapper[4970]: E0930 09:57:31.613200 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1b05f65e-1145-40c4-a5cb-e07766072045-plugin-serving-cert podName:1b05f65e-1145-40c4-a5cb-e07766072045 nodeName:}" failed. No retries permitted until 2025-09-30 09:57:32.113179167 +0000 UTC m=+665.185030271 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/1b05f65e-1145-40c4-a5cb-e07766072045-plugin-serving-cert") pod "nmstate-console-plugin-864bb6dfb5-z745b" (UID: "1b05f65e-1145-40c4-a5cb-e07766072045") : secret "plugin-serving-cert" not found Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.613967 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/1b05f65e-1145-40c4-a5cb-e07766072045-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-z745b\" (UID: \"1b05f65e-1145-40c4-a5cb-e07766072045\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.636059 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qr5kb\" (UniqueName: \"kubernetes.io/projected/1b05f65e-1145-40c4-a5cb-e07766072045-kube-api-access-qr5kb\") pod \"nmstate-console-plugin-864bb6dfb5-z745b\" (UID: \"1b05f65e-1145-40c4-a5cb-e07766072045\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.714542 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0ac6a063-577e-4dae-ae0c-612c38898ae7-console-oauth-config\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.714939 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0ac6a063-577e-4dae-ae0c-612c38898ae7-oauth-serving-cert\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.714970 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0ac6a063-577e-4dae-ae0c-612c38898ae7-console-config\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.715011 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r25f\" (UniqueName: \"kubernetes.io/projected/0ac6a063-577e-4dae-ae0c-612c38898ae7-kube-api-access-4r25f\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.715027 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ac6a063-577e-4dae-ae0c-612c38898ae7-service-ca\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.715053 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac6a063-577e-4dae-ae0c-612c38898ae7-console-serving-cert\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " 
pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.715071 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ac6a063-577e-4dae-ae0c-612c38898ae7-trusted-ca-bundle\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.742807 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-xtzzj"] Sep 30 09:57:31 crc kubenswrapper[4970]: W0930 09:57:31.754330 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42b7f1da_5493_4471_980a_a87efdd8eda2.slice/crio-5f4cdd86ce94c9e053ae0555fe8db085143ff417cc8e797b94502f678abddd27 WatchSource:0}: Error finding container 5f4cdd86ce94c9e053ae0555fe8db085143ff417cc8e797b94502f678abddd27: Status 404 returned error can't find the container with id 5f4cdd86ce94c9e053ae0555fe8db085143ff417cc8e797b94502f678abddd27 Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.816794 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0ac6a063-577e-4dae-ae0c-612c38898ae7-console-oauth-config\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.816840 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0ac6a063-577e-4dae-ae0c-612c38898ae7-oauth-serving-cert\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.816923 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0ac6a063-577e-4dae-ae0c-612c38898ae7-console-config\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.816959 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r25f\" (UniqueName: \"kubernetes.io/projected/0ac6a063-577e-4dae-ae0c-612c38898ae7-kube-api-access-4r25f\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.817002 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ac6a063-577e-4dae-ae0c-612c38898ae7-service-ca\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.817057 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac6a063-577e-4dae-ae0c-612c38898ae7-console-serving-cert\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 
09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.817078 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ac6a063-577e-4dae-ae0c-612c38898ae7-trusted-ca-bundle\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.819101 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0ac6a063-577e-4dae-ae0c-612c38898ae7-service-ca\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.819670 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ac6a063-577e-4dae-ae0c-612c38898ae7-trusted-ca-bundle\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.819918 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0ac6a063-577e-4dae-ae0c-612c38898ae7-oauth-serving-cert\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.820240 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0ac6a063-577e-4dae-ae0c-612c38898ae7-console-config\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.820701 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0ac6a063-577e-4dae-ae0c-612c38898ae7-console-oauth-config\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.822738 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0ac6a063-577e-4dae-ae0c-612c38898ae7-console-serving-cert\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.835260 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r25f\" (UniqueName: \"kubernetes.io/projected/0ac6a063-577e-4dae-ae0c-612c38898ae7-kube-api-access-4r25f\") pod \"console-558dcf77f6-vlqc5\" (UID: \"0ac6a063-577e-4dae-ae0c-612c38898ae7\") " pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.915381 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.917937 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/15cbe10c-fb64-4630-bd5b-fd50c2c07d64-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-ztvnw\" (UID: \"15cbe10c-fb64-4630-bd5b-fd50c2c07d64\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" Sep 30 09:57:31 crc kubenswrapper[4970]: I0930 09:57:31.922142 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/15cbe10c-fb64-4630-bd5b-fd50c2c07d64-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-ztvnw\" (UID: \"15cbe10c-fb64-4630-bd5b-fd50c2c07d64\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" Sep 30 09:57:32 crc kubenswrapper[4970]: I0930 09:57:32.119403 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" Sep 30 09:57:32 crc kubenswrapper[4970]: I0930 09:57:32.120324 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b05f65e-1145-40c4-a5cb-e07766072045-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-z745b\" (UID: \"1b05f65e-1145-40c4-a5cb-e07766072045\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" Sep 30 09:57:32 crc kubenswrapper[4970]: I0930 09:57:32.125244 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/1b05f65e-1145-40c4-a5cb-e07766072045-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-z745b\" (UID: \"1b05f65e-1145-40c4-a5cb-e07766072045\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" Sep 30 09:57:32 crc kubenswrapper[4970]: I0930 09:57:32.129299 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-558dcf77f6-vlqc5"] Sep 30 09:57:32 crc kubenswrapper[4970]: W0930 09:57:32.135292 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ac6a063_577e_4dae_ae0c_612c38898ae7.slice/crio-6efb831c23fc35fee28019e0ef7feb29d0dfc12657ea622b0aa2c34ec06a158c WatchSource:0}: Error finding container 6efb831c23fc35fee28019e0ef7feb29d0dfc12657ea622b0aa2c34ec06a158c: Status 404 returned error can't find the container with id 6efb831c23fc35fee28019e0ef7feb29d0dfc12657ea622b0aa2c34ec06a158c Sep 30 09:57:32 crc kubenswrapper[4970]: I0930 09:57:32.179960 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-558dcf77f6-vlqc5" event={"ID":"0ac6a063-577e-4dae-ae0c-612c38898ae7","Type":"ContainerStarted","Data":"6efb831c23fc35fee28019e0ef7feb29d0dfc12657ea622b0aa2c34ec06a158c"} Sep 30 09:57:32 crc kubenswrapper[4970]: I0930 09:57:32.181369 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-92md2" event={"ID":"cf79d047-21bc-461c-a5c7-7c12104fbf35","Type":"ContainerStarted","Data":"a24b41853c04666010f7088b47174ddd9749251cd46156c202908746eb9bb1d9"} Sep 30 09:57:32 crc kubenswrapper[4970]: I0930 09:57:32.182553 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-xtzzj" 
event={"ID":"42b7f1da-5493-4471-980a-a87efdd8eda2","Type":"ContainerStarted","Data":"5f4cdd86ce94c9e053ae0555fe8db085143ff417cc8e797b94502f678abddd27"} Sep 30 09:57:32 crc kubenswrapper[4970]: I0930 09:57:32.250832 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" Sep 30 09:57:32 crc kubenswrapper[4970]: I0930 09:57:32.313497 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw"] Sep 30 09:57:32 crc kubenswrapper[4970]: W0930 09:57:32.322605 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15cbe10c_fb64_4630_bd5b_fd50c2c07d64.slice/crio-d6bafd3f547c334ce3074c39ad27148a285525e7cbadd3f94890075826108190 WatchSource:0}: Error finding container d6bafd3f547c334ce3074c39ad27148a285525e7cbadd3f94890075826108190: Status 404 returned error can't find the container with id d6bafd3f547c334ce3074c39ad27148a285525e7cbadd3f94890075826108190 Sep 30 09:57:32 crc kubenswrapper[4970]: I0930 09:57:32.658908 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b"] Sep 30 09:57:32 crc kubenswrapper[4970]: W0930 09:57:32.664942 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1b05f65e_1145_40c4_a5cb_e07766072045.slice/crio-ba99de78761212404c9cbd8c6ec8edc9be86aa5fcde2aba4ada1ef424f99b211 WatchSource:0}: Error finding container ba99de78761212404c9cbd8c6ec8edc9be86aa5fcde2aba4ada1ef424f99b211: Status 404 returned error can't find the container with id ba99de78761212404c9cbd8c6ec8edc9be86aa5fcde2aba4ada1ef424f99b211 Sep 30 09:57:33 crc kubenswrapper[4970]: I0930 09:57:33.190596 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" event={"ID":"1b05f65e-1145-40c4-a5cb-e07766072045","Type":"ContainerStarted","Data":"ba99de78761212404c9cbd8c6ec8edc9be86aa5fcde2aba4ada1ef424f99b211"} Sep 30 09:57:33 crc kubenswrapper[4970]: I0930 09:57:33.191659 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" event={"ID":"15cbe10c-fb64-4630-bd5b-fd50c2c07d64","Type":"ContainerStarted","Data":"d6bafd3f547c334ce3074c39ad27148a285525e7cbadd3f94890075826108190"} Sep 30 09:57:33 crc kubenswrapper[4970]: I0930 09:57:33.193178 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-558dcf77f6-vlqc5" event={"ID":"0ac6a063-577e-4dae-ae0c-612c38898ae7","Type":"ContainerStarted","Data":"ed783fbe82cd25dc44d310e0eee4d2a05700b29b2172c728a7a9e87b998b3cc9"} Sep 30 09:57:33 crc kubenswrapper[4970]: I0930 09:57:33.235941 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-558dcf77f6-vlqc5" podStartSLOduration=2.235914275 podStartE2EDuration="2.235914275s" podCreationTimestamp="2025-09-30 09:57:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:57:33.223762026 +0000 UTC m=+666.295612980" watchObservedRunningTime="2025-09-30 09:57:33.235914275 +0000 UTC m=+666.307765209" Sep 30 09:57:35 crc kubenswrapper[4970]: I0930 09:57:35.209090 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-xtzzj" 
event={"ID":"42b7f1da-5493-4471-980a-a87efdd8eda2","Type":"ContainerStarted","Data":"c31ec53eef2ab71034653f8ab19540249a247254204df32d16cc286cb5fab9ab"} Sep 30 09:57:35 crc kubenswrapper[4970]: I0930 09:57:35.212032 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" event={"ID":"15cbe10c-fb64-4630-bd5b-fd50c2c07d64","Type":"ContainerStarted","Data":"74ad1f4cee372e6eebbf8e4ea9f9525f2d78166cd188e2d93c3d54b2ec649e87"} Sep 30 09:57:35 crc kubenswrapper[4970]: I0930 09:57:35.212216 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" Sep 30 09:57:35 crc kubenswrapper[4970]: I0930 09:57:35.213842 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-92md2" event={"ID":"cf79d047-21bc-461c-a5c7-7c12104fbf35","Type":"ContainerStarted","Data":"53f3d5f9d797d3dc1697287590ac275caa64180a47d6db60926025eedb9c4dbc"} Sep 30 09:57:35 crc kubenswrapper[4970]: I0930 09:57:35.214090 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:35 crc kubenswrapper[4970]: I0930 09:57:35.240349 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" podStartSLOduration=2.510881758 podStartE2EDuration="4.24032425s" podCreationTimestamp="2025-09-30 09:57:31 +0000 UTC" firstStartedPulling="2025-09-30 09:57:32.324794961 +0000 UTC m=+665.396645895" lastFinishedPulling="2025-09-30 09:57:34.054237443 +0000 UTC m=+667.126088387" observedRunningTime="2025-09-30 09:57:35.236258616 +0000 UTC m=+668.308109700" watchObservedRunningTime="2025-09-30 09:57:35.24032425 +0000 UTC m=+668.312175204" Sep 30 09:57:35 crc kubenswrapper[4970]: I0930 09:57:35.300458 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-92md2" podStartSLOduration=1.861075927 podStartE2EDuration="4.300429905s" podCreationTimestamp="2025-09-30 09:57:31 +0000 UTC" firstStartedPulling="2025-09-30 09:57:31.616426048 +0000 UTC m=+664.688276982" lastFinishedPulling="2025-09-30 09:57:34.055780026 +0000 UTC m=+667.127630960" observedRunningTime="2025-09-30 09:57:35.295146578 +0000 UTC m=+668.366997512" watchObservedRunningTime="2025-09-30 09:57:35.300429905 +0000 UTC m=+668.372280829" Sep 30 09:57:36 crc kubenswrapper[4970]: I0930 09:57:36.220201 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" event={"ID":"1b05f65e-1145-40c4-a5cb-e07766072045","Type":"ContainerStarted","Data":"5092e75bfb83507f364278611fa59b29230187f300c76e4cb58cb1785084d1b1"} Sep 30 09:57:36 crc kubenswrapper[4970]: I0930 09:57:36.238704 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-z745b" podStartSLOduration=2.846829062 podStartE2EDuration="5.238663885s" podCreationTimestamp="2025-09-30 09:57:31 +0000 UTC" firstStartedPulling="2025-09-30 09:57:32.66776845 +0000 UTC m=+665.739619394" lastFinishedPulling="2025-09-30 09:57:35.059603253 +0000 UTC m=+668.131454217" observedRunningTime="2025-09-30 09:57:36.234001615 +0000 UTC m=+669.305852559" watchObservedRunningTime="2025-09-30 09:57:36.238663885 +0000 UTC m=+669.310514819" Sep 30 09:57:37 crc kubenswrapper[4970]: I0930 09:57:37.230517 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-58fcddf996-xtzzj" event={"ID":"42b7f1da-5493-4471-980a-a87efdd8eda2","Type":"ContainerStarted","Data":"2859dadf8d41cb7e2c1e2394287928f9983eafd67b0e4fc30424fec0d92e1f34"} Sep 30 09:57:37 crc kubenswrapper[4970]: I0930 09:57:37.263880 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58fcddf996-xtzzj" podStartSLOduration=1.7223499420000001 podStartE2EDuration="6.263806267s" podCreationTimestamp="2025-09-30 09:57:31 +0000 UTC" firstStartedPulling="2025-09-30 09:57:31.757211542 +0000 UTC m=+664.829062476" lastFinishedPulling="2025-09-30 09:57:36.298667867 +0000 UTC m=+669.370518801" observedRunningTime="2025-09-30 09:57:37.255459264 +0000 UTC m=+670.327310248" watchObservedRunningTime="2025-09-30 09:57:37.263806267 +0000 UTC m=+670.335657241" Sep 30 09:57:41 crc kubenswrapper[4970]: I0930 09:57:41.588084 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-92md2" Sep 30 09:57:41 crc kubenswrapper[4970]: I0930 09:57:41.915582 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:41 crc kubenswrapper[4970]: I0930 09:57:41.915983 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:41 crc kubenswrapper[4970]: I0930 09:57:41.923382 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:42 crc kubenswrapper[4970]: I0930 09:57:42.271463 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-558dcf77f6-vlqc5" Sep 30 09:57:42 crc kubenswrapper[4970]: I0930 09:57:42.341551 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c8bs2"] Sep 30 09:57:52 crc kubenswrapper[4970]: I0930 09:57:52.127817 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6d689559c5-ztvnw" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.363541 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9"] Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.366939 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.369320 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.378927 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9"] Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.392168 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f33dea7-5310-40ed-9afc-243a4353a42b-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9\" (UID: \"1f33dea7-5310-40ed-9afc-243a4353a42b\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.392235 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f33dea7-5310-40ed-9afc-243a4353a42b-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9\" (UID: \"1f33dea7-5310-40ed-9afc-243a4353a42b\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.392292 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56qpt\" (UniqueName: \"kubernetes.io/projected/1f33dea7-5310-40ed-9afc-243a4353a42b-kube-api-access-56qpt\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9\" (UID: \"1f33dea7-5310-40ed-9afc-243a4353a42b\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.395960 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-c8bs2" podUID="4eac6509-7889-4976-bcc4-bf65486c098f" containerName="console" containerID="cri-o://b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35" gracePeriod=15 Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.494683 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f33dea7-5310-40ed-9afc-243a4353a42b-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9\" (UID: \"1f33dea7-5310-40ed-9afc-243a4353a42b\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.494778 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f33dea7-5310-40ed-9afc-243a4353a42b-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9\" (UID: \"1f33dea7-5310-40ed-9afc-243a4353a42b\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.494883 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56qpt\" (UniqueName: \"kubernetes.io/projected/1f33dea7-5310-40ed-9afc-243a4353a42b-kube-api-access-56qpt\") pod 
\"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9\" (UID: \"1f33dea7-5310-40ed-9afc-243a4353a42b\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.496036 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f33dea7-5310-40ed-9afc-243a4353a42b-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9\" (UID: \"1f33dea7-5310-40ed-9afc-243a4353a42b\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.496281 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f33dea7-5310-40ed-9afc-243a4353a42b-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9\" (UID: \"1f33dea7-5310-40ed-9afc-243a4353a42b\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.520026 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56qpt\" (UniqueName: \"kubernetes.io/projected/1f33dea7-5310-40ed-9afc-243a4353a42b-kube-api-access-56qpt\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9\" (UID: \"1f33dea7-5310-40ed-9afc-243a4353a42b\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.692279 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.814751 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c8bs2_4eac6509-7889-4976-bcc4-bf65486c098f/console/0.log" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.814830 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.900342 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-oauth-serving-cert\") pod \"4eac6509-7889-4976-bcc4-bf65486c098f\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.900479 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-trusted-ca-bundle\") pod \"4eac6509-7889-4976-bcc4-bf65486c098f\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.900518 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlvz5\" (UniqueName: \"kubernetes.io/projected/4eac6509-7889-4976-bcc4-bf65486c098f-kube-api-access-vlvz5\") pod \"4eac6509-7889-4976-bcc4-bf65486c098f\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.900622 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-service-ca\") pod \"4eac6509-7889-4976-bcc4-bf65486c098f\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.900661 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-console-config\") pod \"4eac6509-7889-4976-bcc4-bf65486c098f\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.900725 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4eac6509-7889-4976-bcc4-bf65486c098f-console-oauth-config\") pod \"4eac6509-7889-4976-bcc4-bf65486c098f\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.900795 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4eac6509-7889-4976-bcc4-bf65486c098f-console-serving-cert\") pod \"4eac6509-7889-4976-bcc4-bf65486c098f\" (UID: \"4eac6509-7889-4976-bcc4-bf65486c098f\") " Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.901270 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4eac6509-7889-4976-bcc4-bf65486c098f" (UID: "4eac6509-7889-4976-bcc4-bf65486c098f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.901277 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4eac6509-7889-4976-bcc4-bf65486c098f" (UID: "4eac6509-7889-4976-bcc4-bf65486c098f"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.901650 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-service-ca" (OuterVolumeSpecName: "service-ca") pod "4eac6509-7889-4976-bcc4-bf65486c098f" (UID: "4eac6509-7889-4976-bcc4-bf65486c098f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.901860 4970 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-service-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.901887 4970 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.901901 4970 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.902554 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-console-config" (OuterVolumeSpecName: "console-config") pod "4eac6509-7889-4976-bcc4-bf65486c098f" (UID: "4eac6509-7889-4976-bcc4-bf65486c098f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.906715 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eac6509-7889-4976-bcc4-bf65486c098f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4eac6509-7889-4976-bcc4-bf65486c098f" (UID: "4eac6509-7889-4976-bcc4-bf65486c098f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.907402 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eac6509-7889-4976-bcc4-bf65486c098f-kube-api-access-vlvz5" (OuterVolumeSpecName: "kube-api-access-vlvz5") pod "4eac6509-7889-4976-bcc4-bf65486c098f" (UID: "4eac6509-7889-4976-bcc4-bf65486c098f"). InnerVolumeSpecName "kube-api-access-vlvz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:58:07 crc kubenswrapper[4970]: I0930 09:58:07.907693 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eac6509-7889-4976-bcc4-bf65486c098f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4eac6509-7889-4976-bcc4-bf65486c098f" (UID: "4eac6509-7889-4976-bcc4-bf65486c098f"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.002853 4970 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4eac6509-7889-4976-bcc4-bf65486c098f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.002889 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlvz5\" (UniqueName: \"kubernetes.io/projected/4eac6509-7889-4976-bcc4-bf65486c098f-kube-api-access-vlvz5\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.002902 4970 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4eac6509-7889-4976-bcc4-bf65486c098f-console-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.002911 4970 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4eac6509-7889-4976-bcc4-bf65486c098f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.182278 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9"] Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.448070 4970 generic.go:334] "Generic (PLEG): container finished" podID="1f33dea7-5310-40ed-9afc-243a4353a42b" containerID="e5b9f1ffea44fefa44c3520df34c625b40610eb0c858ffa56c88fcd19c1367b0" exitCode=0 Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.448171 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" event={"ID":"1f33dea7-5310-40ed-9afc-243a4353a42b","Type":"ContainerDied","Data":"e5b9f1ffea44fefa44c3520df34c625b40610eb0c858ffa56c88fcd19c1367b0"} Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.448206 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" event={"ID":"1f33dea7-5310-40ed-9afc-243a4353a42b","Type":"ContainerStarted","Data":"d57c344aa50afe7c129eb4f799a6eacfa207500503963e66bd345997f8fe6aca"} Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.450838 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-c8bs2_4eac6509-7889-4976-bcc4-bf65486c098f/console/0.log" Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.450883 4970 generic.go:334] "Generic (PLEG): container finished" podID="4eac6509-7889-4976-bcc4-bf65486c098f" containerID="b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35" exitCode=2 Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.450919 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c8bs2" event={"ID":"4eac6509-7889-4976-bcc4-bf65486c098f","Type":"ContainerDied","Data":"b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35"} Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.450946 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-c8bs2" event={"ID":"4eac6509-7889-4976-bcc4-bf65486c098f","Type":"ContainerDied","Data":"43626e403f9ed4538e4bd77f9e10261899ad1cf9b69bb2133b096bae76419b2a"} Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.450966 4970 scope.go:117] "RemoveContainer" 
containerID="b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35" Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.451072 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-c8bs2" Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.478674 4970 scope.go:117] "RemoveContainer" containerID="b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35" Sep 30 09:58:08 crc kubenswrapper[4970]: E0930 09:58:08.479410 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35\": container with ID starting with b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35 not found: ID does not exist" containerID="b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35" Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.479455 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35"} err="failed to get container status \"b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35\": rpc error: code = NotFound desc = could not find container \"b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35\": container with ID starting with b4df6212f5378911bd16a8e8f73ec7d44d91ea90904923071970d813259ccd35 not found: ID does not exist" Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.503639 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-c8bs2"] Sep 30 09:58:08 crc kubenswrapper[4970]: I0930 09:58:08.510318 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-c8bs2"] Sep 30 09:58:09 crc kubenswrapper[4970]: I0930 09:58:09.679444 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4eac6509-7889-4976-bcc4-bf65486c098f" path="/var/lib/kubelet/pods/4eac6509-7889-4976-bcc4-bf65486c098f/volumes" Sep 30 09:58:10 crc kubenswrapper[4970]: I0930 09:58:10.468816 4970 generic.go:334] "Generic (PLEG): container finished" podID="1f33dea7-5310-40ed-9afc-243a4353a42b" containerID="82adc76896c3c672777292f2bf7b8501d1284c913e685415542f75573826242a" exitCode=0 Sep 30 09:58:10 crc kubenswrapper[4970]: I0930 09:58:10.468884 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" event={"ID":"1f33dea7-5310-40ed-9afc-243a4353a42b","Type":"ContainerDied","Data":"82adc76896c3c672777292f2bf7b8501d1284c913e685415542f75573826242a"} Sep 30 09:58:11 crc kubenswrapper[4970]: I0930 09:58:11.477923 4970 generic.go:334] "Generic (PLEG): container finished" podID="1f33dea7-5310-40ed-9afc-243a4353a42b" containerID="e29fcc83cb826817d8c41881b3dc4bd70d7cf133b1d97b2041a8b5696473833e" exitCode=0 Sep 30 09:58:11 crc kubenswrapper[4970]: I0930 09:58:11.478075 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" event={"ID":"1f33dea7-5310-40ed-9afc-243a4353a42b","Type":"ContainerDied","Data":"e29fcc83cb826817d8c41881b3dc4bd70d7cf133b1d97b2041a8b5696473833e"} Sep 30 09:58:12 crc kubenswrapper[4970]: I0930 09:58:12.755863 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:12 crc kubenswrapper[4970]: I0930 09:58:12.826590 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f33dea7-5310-40ed-9afc-243a4353a42b-util\") pod \"1f33dea7-5310-40ed-9afc-243a4353a42b\" (UID: \"1f33dea7-5310-40ed-9afc-243a4353a42b\") " Sep 30 09:58:12 crc kubenswrapper[4970]: I0930 09:58:12.826684 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56qpt\" (UniqueName: \"kubernetes.io/projected/1f33dea7-5310-40ed-9afc-243a4353a42b-kube-api-access-56qpt\") pod \"1f33dea7-5310-40ed-9afc-243a4353a42b\" (UID: \"1f33dea7-5310-40ed-9afc-243a4353a42b\") " Sep 30 09:58:12 crc kubenswrapper[4970]: I0930 09:58:12.826762 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f33dea7-5310-40ed-9afc-243a4353a42b-bundle\") pod \"1f33dea7-5310-40ed-9afc-243a4353a42b\" (UID: \"1f33dea7-5310-40ed-9afc-243a4353a42b\") " Sep 30 09:58:12 crc kubenswrapper[4970]: I0930 09:58:12.829273 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f33dea7-5310-40ed-9afc-243a4353a42b-bundle" (OuterVolumeSpecName: "bundle") pod "1f33dea7-5310-40ed-9afc-243a4353a42b" (UID: "1f33dea7-5310-40ed-9afc-243a4353a42b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:58:12 crc kubenswrapper[4970]: I0930 09:58:12.837409 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f33dea7-5310-40ed-9afc-243a4353a42b-kube-api-access-56qpt" (OuterVolumeSpecName: "kube-api-access-56qpt") pod "1f33dea7-5310-40ed-9afc-243a4353a42b" (UID: "1f33dea7-5310-40ed-9afc-243a4353a42b"). InnerVolumeSpecName "kube-api-access-56qpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:58:12 crc kubenswrapper[4970]: I0930 09:58:12.846806 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f33dea7-5310-40ed-9afc-243a4353a42b-util" (OuterVolumeSpecName: "util") pod "1f33dea7-5310-40ed-9afc-243a4353a42b" (UID: "1f33dea7-5310-40ed-9afc-243a4353a42b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:58:12 crc kubenswrapper[4970]: I0930 09:58:12.929128 4970 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f33dea7-5310-40ed-9afc-243a4353a42b-util\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:12 crc kubenswrapper[4970]: I0930 09:58:12.929165 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56qpt\" (UniqueName: \"kubernetes.io/projected/1f33dea7-5310-40ed-9afc-243a4353a42b-kube-api-access-56qpt\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:12 crc kubenswrapper[4970]: I0930 09:58:12.929175 4970 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f33dea7-5310-40ed-9afc-243a4353a42b-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:13 crc kubenswrapper[4970]: I0930 09:58:13.495517 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" event={"ID":"1f33dea7-5310-40ed-9afc-243a4353a42b","Type":"ContainerDied","Data":"d57c344aa50afe7c129eb4f799a6eacfa207500503963e66bd345997f8fe6aca"} Sep 30 09:58:13 crc kubenswrapper[4970]: I0930 09:58:13.495574 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d57c344aa50afe7c129eb4f799a6eacfa207500503963e66bd345997f8fe6aca" Sep 30 09:58:13 crc kubenswrapper[4970]: I0930 09:58:13.495625 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.077668 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z"] Sep 30 09:58:23 crc kubenswrapper[4970]: E0930 09:58:23.078635 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f33dea7-5310-40ed-9afc-243a4353a42b" containerName="pull" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.078651 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f33dea7-5310-40ed-9afc-243a4353a42b" containerName="pull" Sep 30 09:58:23 crc kubenswrapper[4970]: E0930 09:58:23.078665 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f33dea7-5310-40ed-9afc-243a4353a42b" containerName="extract" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.078673 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f33dea7-5310-40ed-9afc-243a4353a42b" containerName="extract" Sep 30 09:58:23 crc kubenswrapper[4970]: E0930 09:58:23.078684 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eac6509-7889-4976-bcc4-bf65486c098f" containerName="console" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.078692 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eac6509-7889-4976-bcc4-bf65486c098f" containerName="console" Sep 30 09:58:23 crc kubenswrapper[4970]: E0930 09:58:23.078707 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f33dea7-5310-40ed-9afc-243a4353a42b" containerName="util" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.078714 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f33dea7-5310-40ed-9afc-243a4353a42b" containerName="util" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.078845 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f33dea7-5310-40ed-9afc-243a4353a42b" containerName="extract" Sep 
30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.078862 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eac6509-7889-4976-bcc4-bf65486c098f" containerName="console" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.079409 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.084132 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.084252 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.084848 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.085794 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.090817 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-24kpk" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.150499 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z"] Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.179501 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tg8g\" (UniqueName: \"kubernetes.io/projected/44474490-8653-4ad2-8ae3-d4e089664fb8-kube-api-access-2tg8g\") pod \"metallb-operator-controller-manager-5689865b7f-lzf5z\" (UID: \"44474490-8653-4ad2-8ae3-d4e089664fb8\") " pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.179594 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44474490-8653-4ad2-8ae3-d4e089664fb8-apiservice-cert\") pod \"metallb-operator-controller-manager-5689865b7f-lzf5z\" (UID: \"44474490-8653-4ad2-8ae3-d4e089664fb8\") " pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.179644 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44474490-8653-4ad2-8ae3-d4e089664fb8-webhook-cert\") pod \"metallb-operator-controller-manager-5689865b7f-lzf5z\" (UID: \"44474490-8653-4ad2-8ae3-d4e089664fb8\") " pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.280775 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44474490-8653-4ad2-8ae3-d4e089664fb8-webhook-cert\") pod \"metallb-operator-controller-manager-5689865b7f-lzf5z\" (UID: \"44474490-8653-4ad2-8ae3-d4e089664fb8\") " pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.280874 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tg8g\" (UniqueName: 
\"kubernetes.io/projected/44474490-8653-4ad2-8ae3-d4e089664fb8-kube-api-access-2tg8g\") pod \"metallb-operator-controller-manager-5689865b7f-lzf5z\" (UID: \"44474490-8653-4ad2-8ae3-d4e089664fb8\") " pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.280944 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44474490-8653-4ad2-8ae3-d4e089664fb8-apiservice-cert\") pod \"metallb-operator-controller-manager-5689865b7f-lzf5z\" (UID: \"44474490-8653-4ad2-8ae3-d4e089664fb8\") " pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.289140 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/44474490-8653-4ad2-8ae3-d4e089664fb8-apiservice-cert\") pod \"metallb-operator-controller-manager-5689865b7f-lzf5z\" (UID: \"44474490-8653-4ad2-8ae3-d4e089664fb8\") " pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.298894 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tg8g\" (UniqueName: \"kubernetes.io/projected/44474490-8653-4ad2-8ae3-d4e089664fb8-kube-api-access-2tg8g\") pod \"metallb-operator-controller-manager-5689865b7f-lzf5z\" (UID: \"44474490-8653-4ad2-8ae3-d4e089664fb8\") " pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.300385 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/44474490-8653-4ad2-8ae3-d4e089664fb8-webhook-cert\") pod \"metallb-operator-controller-manager-5689865b7f-lzf5z\" (UID: \"44474490-8653-4ad2-8ae3-d4e089664fb8\") " pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.336810 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk"] Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.337928 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.341736 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.341951 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4v2fb" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.342646 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.358479 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk"] Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.401350 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.492064 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0365d978-934a-4079-98be-d612928d9496-webhook-cert\") pod \"metallb-operator-webhook-server-79d5d6bd79-dmktk\" (UID: \"0365d978-934a-4079-98be-d612928d9496\") " pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.492413 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0365d978-934a-4079-98be-d612928d9496-apiservice-cert\") pod \"metallb-operator-webhook-server-79d5d6bd79-dmktk\" (UID: \"0365d978-934a-4079-98be-d612928d9496\") " pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.492455 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9fpg\" (UniqueName: \"kubernetes.io/projected/0365d978-934a-4079-98be-d612928d9496-kube-api-access-h9fpg\") pod \"metallb-operator-webhook-server-79d5d6bd79-dmktk\" (UID: \"0365d978-934a-4079-98be-d612928d9496\") " pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.593452 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0365d978-934a-4079-98be-d612928d9496-webhook-cert\") pod \"metallb-operator-webhook-server-79d5d6bd79-dmktk\" (UID: \"0365d978-934a-4079-98be-d612928d9496\") " pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.593500 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0365d978-934a-4079-98be-d612928d9496-apiservice-cert\") pod \"metallb-operator-webhook-server-79d5d6bd79-dmktk\" (UID: \"0365d978-934a-4079-98be-d612928d9496\") " pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.593545 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9fpg\" (UniqueName: \"kubernetes.io/projected/0365d978-934a-4079-98be-d612928d9496-kube-api-access-h9fpg\") pod \"metallb-operator-webhook-server-79d5d6bd79-dmktk\" (UID: \"0365d978-934a-4079-98be-d612928d9496\") " pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.603487 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0365d978-934a-4079-98be-d612928d9496-apiservice-cert\") pod \"metallb-operator-webhook-server-79d5d6bd79-dmktk\" (UID: \"0365d978-934a-4079-98be-d612928d9496\") " pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.603574 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0365d978-934a-4079-98be-d612928d9496-webhook-cert\") pod \"metallb-operator-webhook-server-79d5d6bd79-dmktk\" (UID: \"0365d978-934a-4079-98be-d612928d9496\") " 
pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.619416 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9fpg\" (UniqueName: \"kubernetes.io/projected/0365d978-934a-4079-98be-d612928d9496-kube-api-access-h9fpg\") pod \"metallb-operator-webhook-server-79d5d6bd79-dmktk\" (UID: \"0365d978-934a-4079-98be-d612928d9496\") " pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.653064 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.712329 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z"] Sep 30 09:58:23 crc kubenswrapper[4970]: I0930 09:58:23.975794 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk"] Sep 30 09:58:23 crc kubenswrapper[4970]: W0930 09:58:23.985408 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0365d978_934a_4079_98be_d612928d9496.slice/crio-e3a4cb696eb7e620963f7c6869aeebd51e113d9a6ed14f03635b46e33e671ce6 WatchSource:0}: Error finding container e3a4cb696eb7e620963f7c6869aeebd51e113d9a6ed14f03635b46e33e671ce6: Status 404 returned error can't find the container with id e3a4cb696eb7e620963f7c6869aeebd51e113d9a6ed14f03635b46e33e671ce6 Sep 30 09:58:24 crc kubenswrapper[4970]: I0930 09:58:24.576414 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" event={"ID":"0365d978-934a-4079-98be-d612928d9496","Type":"ContainerStarted","Data":"e3a4cb696eb7e620963f7c6869aeebd51e113d9a6ed14f03635b46e33e671ce6"} Sep 30 09:58:24 crc kubenswrapper[4970]: I0930 09:58:24.578036 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" event={"ID":"44474490-8653-4ad2-8ae3-d4e089664fb8","Type":"ContainerStarted","Data":"59d5a6f039d6f99da4f1ac1fb897e4e33b66503f320e8d3b88b4a73fe400933b"} Sep 30 09:58:29 crc kubenswrapper[4970]: I0930 09:58:29.617186 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" event={"ID":"0365d978-934a-4079-98be-d612928d9496","Type":"ContainerStarted","Data":"a89ee82adf0e241bfd601c91895f9c9fede2f69c4b600e350dc1ae6740b23e7a"} Sep 30 09:58:29 crc kubenswrapper[4970]: I0930 09:58:29.617843 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:29 crc kubenswrapper[4970]: I0930 09:58:29.619552 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" event={"ID":"44474490-8653-4ad2-8ae3-d4e089664fb8","Type":"ContainerStarted","Data":"6563533d5bbc3cd946a2dd2da56c2c63da62de11411f5dbc755b85383c8a3995"} Sep 30 09:58:29 crc kubenswrapper[4970]: I0930 09:58:29.619818 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:58:29 crc kubenswrapper[4970]: I0930 09:58:29.642562 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" podStartSLOduration=1.713074698 podStartE2EDuration="6.642534788s" podCreationTimestamp="2025-09-30 09:58:23 +0000 UTC" firstStartedPulling="2025-09-30 09:58:23.988742572 +0000 UTC m=+717.060593506" lastFinishedPulling="2025-09-30 09:58:28.918202652 +0000 UTC m=+721.990053596" observedRunningTime="2025-09-30 09:58:29.641920461 +0000 UTC m=+722.713771395" watchObservedRunningTime="2025-09-30 09:58:29.642534788 +0000 UTC m=+722.714385722" Sep 30 09:58:29 crc kubenswrapper[4970]: I0930 09:58:29.672295 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" podStartSLOduration=1.551947757 podStartE2EDuration="6.672270584s" podCreationTimestamp="2025-09-30 09:58:23 +0000 UTC" firstStartedPulling="2025-09-30 09:58:23.76637763 +0000 UTC m=+716.838228564" lastFinishedPulling="2025-09-30 09:58:28.886700457 +0000 UTC m=+721.958551391" observedRunningTime="2025-09-30 09:58:29.669068708 +0000 UTC m=+722.740919642" watchObservedRunningTime="2025-09-30 09:58:29.672270584 +0000 UTC m=+722.744121508" Sep 30 09:58:43 crc kubenswrapper[4970]: I0930 09:58:43.660274 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-79d5d6bd79-dmktk" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.189157 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nmtqv"] Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.190168 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" podUID="169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f" containerName="controller-manager" containerID="cri-o://0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d" gracePeriod=30 Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.335852 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r"] Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.348619 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" podUID="cd9dda51-d5d7-46e1-886d-865955b5bd39" containerName="route-controller-manager" containerID="cri-o://8704e70dff1fe4e24ba4a4b0e05fad68fc671ff00d94de39d910687a1ee205dc" gracePeriod=30 Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.723609 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.786508 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-config\") pod \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.786584 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-serving-cert\") pod \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.786616 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zqc9\" (UniqueName: \"kubernetes.io/projected/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-kube-api-access-7zqc9\") pod \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.786667 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-proxy-ca-bundles\") pod \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.786733 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-client-ca\") pod \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\" (UID: \"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f\") " Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.787768 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-config" (OuterVolumeSpecName: "config") pod "169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f" (UID: "169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.789281 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f" (UID: "169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.792283 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-client-ca" (OuterVolumeSpecName: "client-ca") pod "169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f" (UID: "169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.795560 4970 generic.go:334] "Generic (PLEG): container finished" podID="169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f" containerID="0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d" exitCode=0 Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.795647 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" event={"ID":"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f","Type":"ContainerDied","Data":"0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d"} Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.795681 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" event={"ID":"169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f","Type":"ContainerDied","Data":"f8d256c1d074f34b9aca1a574427e3feb774edaa99c65a5f8106e26001474d30"} Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.795704 4970 scope.go:117] "RemoveContainer" containerID="0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.795823 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-nmtqv" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.798429 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f" (UID: "169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.800598 4970 generic.go:334] "Generic (PLEG): container finished" podID="cd9dda51-d5d7-46e1-886d-865955b5bd39" containerID="8704e70dff1fe4e24ba4a4b0e05fad68fc671ff00d94de39d910687a1ee205dc" exitCode=0 Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.800644 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" event={"ID":"cd9dda51-d5d7-46e1-886d-865955b5bd39","Type":"ContainerDied","Data":"8704e70dff1fe4e24ba4a4b0e05fad68fc671ff00d94de39d910687a1ee205dc"} Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.803643 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-kube-api-access-7zqc9" (OuterVolumeSpecName: "kube-api-access-7zqc9") pod "169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f" (UID: "169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f"). InnerVolumeSpecName "kube-api-access-7zqc9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.817007 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.823356 4970 scope.go:117] "RemoveContainer" containerID="0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d" Sep 30 09:58:56 crc kubenswrapper[4970]: E0930 09:58:56.823830 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d\": container with ID starting with 0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d not found: ID does not exist" containerID="0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.823886 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d"} err="failed to get container status \"0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d\": rpc error: code = NotFound desc = could not find container \"0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d\": container with ID starting with 0c7fa993dcc3941316dad8fdec841404277c31c4ef6c0b2bf3c19e281cad058d not found: ID does not exist" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.888773 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd9dda51-d5d7-46e1-886d-865955b5bd39-serving-cert\") pod \"cd9dda51-d5d7-46e1-886d-865955b5bd39\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.888867 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jh8bg\" (UniqueName: \"kubernetes.io/projected/cd9dda51-d5d7-46e1-886d-865955b5bd39-kube-api-access-jh8bg\") pod \"cd9dda51-d5d7-46e1-886d-865955b5bd39\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.888915 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd9dda51-d5d7-46e1-886d-865955b5bd39-client-ca\") pod \"cd9dda51-d5d7-46e1-886d-865955b5bd39\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.888940 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9dda51-d5d7-46e1-886d-865955b5bd39-config\") pod \"cd9dda51-d5d7-46e1-886d-865955b5bd39\" (UID: \"cd9dda51-d5d7-46e1-886d-865955b5bd39\") " Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.889271 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.889291 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.889302 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zqc9\" (UniqueName: \"kubernetes.io/projected/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-kube-api-access-7zqc9\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:56 
crc kubenswrapper[4970]: I0930 09:58:56.889315 4970 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.889326 4970 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f-client-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.890015 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9dda51-d5d7-46e1-886d-865955b5bd39-client-ca" (OuterVolumeSpecName: "client-ca") pod "cd9dda51-d5d7-46e1-886d-865955b5bd39" (UID: "cd9dda51-d5d7-46e1-886d-865955b5bd39"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.890032 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd9dda51-d5d7-46e1-886d-865955b5bd39-config" (OuterVolumeSpecName: "config") pod "cd9dda51-d5d7-46e1-886d-865955b5bd39" (UID: "cd9dda51-d5d7-46e1-886d-865955b5bd39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.894584 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd9dda51-d5d7-46e1-886d-865955b5bd39-kube-api-access-jh8bg" (OuterVolumeSpecName: "kube-api-access-jh8bg") pod "cd9dda51-d5d7-46e1-886d-865955b5bd39" (UID: "cd9dda51-d5d7-46e1-886d-865955b5bd39"). InnerVolumeSpecName "kube-api-access-jh8bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.897552 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd9dda51-d5d7-46e1-886d-865955b5bd39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cd9dda51-d5d7-46e1-886d-865955b5bd39" (UID: "cd9dda51-d5d7-46e1-886d-865955b5bd39"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.991063 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd9dda51-d5d7-46e1-886d-865955b5bd39-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.991109 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jh8bg\" (UniqueName: \"kubernetes.io/projected/cd9dda51-d5d7-46e1-886d-865955b5bd39-kube-api-access-jh8bg\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.991127 4970 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cd9dda51-d5d7-46e1-886d-865955b5bd39-client-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:56 crc kubenswrapper[4970]: I0930 09:58:56.991144 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd9dda51-d5d7-46e1-886d-865955b5bd39-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.125625 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nmtqv"] Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.129438 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-nmtqv"] Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.663783 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m"] Sep 30 09:58:57 crc kubenswrapper[4970]: E0930 09:58:57.664192 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd9dda51-d5d7-46e1-886d-865955b5bd39" containerName="route-controller-manager" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.664212 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd9dda51-d5d7-46e1-886d-865955b5bd39" containerName="route-controller-manager" Sep 30 09:58:57 crc kubenswrapper[4970]: E0930 09:58:57.664249 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f" containerName="controller-manager" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.664261 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f" containerName="controller-manager" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.664396 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f" containerName="controller-manager" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.664409 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd9dda51-d5d7-46e1-886d-865955b5bd39" containerName="route-controller-manager" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.665029 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.666534 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m"] Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.667454 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.672127 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.672629 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.672877 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.673261 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.673352 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.673404 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.678359 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.681961 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f" path="/var/lib/kubelet/pods/169d1b8f-20b4-4e29-88d9-5bcbe32ccb4f/volumes" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.682815 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m"] Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.702592 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/193ec11b-5b83-4ef5-b490-d64d5dabfe17-serving-cert\") pod \"route-controller-manager-6894c86d8f-dzj4m\" (UID: \"193ec11b-5b83-4ef5-b490-d64d5dabfe17\") " pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.702650 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-config\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.702823 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/940ba6ad-7b4b-4652-b428-abdb05ed735b-serving-cert\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.702867 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-proxy-ca-bundles\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " 
pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.702973 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqc4t\" (UniqueName: \"kubernetes.io/projected/940ba6ad-7b4b-4652-b428-abdb05ed735b-kube-api-access-pqc4t\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.703053 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/193ec11b-5b83-4ef5-b490-d64d5dabfe17-client-ca\") pod \"route-controller-manager-6894c86d8f-dzj4m\" (UID: \"193ec11b-5b83-4ef5-b490-d64d5dabfe17\") " pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.703084 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfkz7\" (UniqueName: \"kubernetes.io/projected/193ec11b-5b83-4ef5-b490-d64d5dabfe17-kube-api-access-mfkz7\") pod \"route-controller-manager-6894c86d8f-dzj4m\" (UID: \"193ec11b-5b83-4ef5-b490-d64d5dabfe17\") " pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.703134 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-client-ca\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.703222 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/193ec11b-5b83-4ef5-b490-d64d5dabfe17-config\") pod \"route-controller-manager-6894c86d8f-dzj4m\" (UID: \"193ec11b-5b83-4ef5-b490-d64d5dabfe17\") " pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.707608 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m"] Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.806938 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/193ec11b-5b83-4ef5-b490-d64d5dabfe17-serving-cert\") pod \"route-controller-manager-6894c86d8f-dzj4m\" (UID: \"193ec11b-5b83-4ef5-b490-d64d5dabfe17\") " pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.807348 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-config\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.807518 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/940ba6ad-7b4b-4652-b428-abdb05ed735b-serving-cert\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.807745 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-proxy-ca-bundles\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.807892 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqc4t\" (UniqueName: \"kubernetes.io/projected/940ba6ad-7b4b-4652-b428-abdb05ed735b-kube-api-access-pqc4t\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.808057 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/193ec11b-5b83-4ef5-b490-d64d5dabfe17-client-ca\") pod \"route-controller-manager-6894c86d8f-dzj4m\" (UID: \"193ec11b-5b83-4ef5-b490-d64d5dabfe17\") " pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.808213 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfkz7\" (UniqueName: \"kubernetes.io/projected/193ec11b-5b83-4ef5-b490-d64d5dabfe17-kube-api-access-mfkz7\") pod \"route-controller-manager-6894c86d8f-dzj4m\" (UID: \"193ec11b-5b83-4ef5-b490-d64d5dabfe17\") " pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.808423 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-client-ca\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.808607 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/193ec11b-5b83-4ef5-b490-d64d5dabfe17-config\") pod \"route-controller-manager-6894c86d8f-dzj4m\" (UID: \"193ec11b-5b83-4ef5-b490-d64d5dabfe17\") " pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.809590 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-config\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.809761 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-proxy-ca-bundles\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: 
\"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.810801 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/193ec11b-5b83-4ef5-b490-d64d5dabfe17-client-ca\") pod \"route-controller-manager-6894c86d8f-dzj4m\" (UID: \"193ec11b-5b83-4ef5-b490-d64d5dabfe17\") " pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.811060 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/193ec11b-5b83-4ef5-b490-d64d5dabfe17-config\") pod \"route-controller-manager-6894c86d8f-dzj4m\" (UID: \"193ec11b-5b83-4ef5-b490-d64d5dabfe17\") " pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.812234 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-client-ca\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.813348 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/193ec11b-5b83-4ef5-b490-d64d5dabfe17-serving-cert\") pod \"route-controller-manager-6894c86d8f-dzj4m\" (UID: \"193ec11b-5b83-4ef5-b490-d64d5dabfe17\") " pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.813906 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/940ba6ad-7b4b-4652-b428-abdb05ed735b-serving-cert\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.816348 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" event={"ID":"cd9dda51-d5d7-46e1-886d-865955b5bd39","Type":"ContainerDied","Data":"3b2c98bf5dc524fcd6fef681f27c749b0d8254b9a106792b719ccda2ac7d6daf"} Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.816379 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.816457 4970 scope.go:117] "RemoveContainer" containerID="8704e70dff1fe4e24ba4a4b0e05fad68fc671ff00d94de39d910687a1ee205dc" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.837953 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfkz7\" (UniqueName: \"kubernetes.io/projected/193ec11b-5b83-4ef5-b490-d64d5dabfe17-kube-api-access-mfkz7\") pod \"route-controller-manager-6894c86d8f-dzj4m\" (UID: \"193ec11b-5b83-4ef5-b490-d64d5dabfe17\") " pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.837975 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqc4t\" (UniqueName: \"kubernetes.io/projected/940ba6ad-7b4b-4652-b428-abdb05ed735b-kube-api-access-pqc4t\") pod \"controller-manager-5c7ff8fb7c-skx9m\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.876340 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r"] Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.881047 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-zwd8r"] Sep 30 09:58:57 crc kubenswrapper[4970]: I0930 09:58:57.989788 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.000010 4970 util.go:30] "No sandbox for pod can be found. 
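
Every kubenswrapper message in this journal carries a klog header: a severity letter (I for info, W for warning, E for error), the month and day, a wall-clock time with microsecond precision, the emitting PID, and the source file and line (reflector.go:368, reconciler_common.go:245, and so on). A minimal Go sketch for splitting such a header into its parts follows; the only assumption is the fixed klog layout, and this is not a kubelet API.

package main

import (
	"fmt"
	"time"
)

// parseKlogHeader splits a klog-style prefix such as
// "I0930 09:58:57.702592" into its severity letter and timestamp.
// klog omits the year, so the parsed time carries year 0; supply
// the year yourself if you need an absolute date.
func parseKlogHeader(h string) (sev byte, ts time.Time, err error) {
	if len(h) < len("I0102 15:04:05.000000") {
		return 0, time.Time{}, fmt.Errorf("short header: %q", h)
	}
	ts, err = time.Parse("0102 15:04:05.000000", h[1:])
	return h[0], ts, err
}

func main() {
	sev, ts, err := parseKlogHeader("I0930 09:58:57.702592")
	if err != nil {
		panic(err)
	}
	fmt.Printf("severity=%c month=%s day=%d time=%s\n",
		sev, ts.Month(), ts.Day(), ts.Format("15:04:05.000000"))
}
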
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.158729 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m"] Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.292748 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m"] Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.347816 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m"] Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.828102 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" event={"ID":"940ba6ad-7b4b-4652-b428-abdb05ed735b","Type":"ContainerStarted","Data":"6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a"} Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.828178 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" event={"ID":"940ba6ad-7b4b-4652-b428-abdb05ed735b","Type":"ContainerStarted","Data":"59fdb9791c312dc218b7791810f7066ef44bf0b50c8630d020e919065c2e789f"} Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.828302 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.829454 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" podUID="940ba6ad-7b4b-4652-b428-abdb05ed735b" containerName="controller-manager" containerID="cri-o://6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a" gracePeriod=30 Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.835869 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" event={"ID":"193ec11b-5b83-4ef5-b490-d64d5dabfe17","Type":"ContainerStarted","Data":"96d0ce678196b0accadcc6730d4ad6d60abd3d75ef5198ba4747e7f78ceab9cc"} Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.835945 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" event={"ID":"193ec11b-5b83-4ef5-b490-d64d5dabfe17","Type":"ContainerStarted","Data":"b14d16271a9e683cb9951ca6cdfef2f35c78695e0f7f182c202aff53da5d69ff"} Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.836169 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.836254 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.858414 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" podStartSLOduration=2.858392577 podStartE2EDuration="2.858392577s" podCreationTimestamp="2025-09-30 09:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 
09:58:58.85368118 +0000 UTC m=+751.925532124" watchObservedRunningTime="2025-09-30 09:58:58.858392577 +0000 UTC m=+751.930243511" Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.903218 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" podStartSLOduration=2.903195463 podStartE2EDuration="2.903195463s" podCreationTimestamp="2025-09-30 09:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:58:58.878469572 +0000 UTC m=+751.950320506" watchObservedRunningTime="2025-09-30 09:58:58.903195463 +0000 UTC m=+751.975046387" Sep 30 09:58:58 crc kubenswrapper[4970]: I0930 09:58:58.981249 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6894c86d8f-dzj4m" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.209428 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.248395 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77b6b4b6f8-7djww"] Sep 30 09:58:59 crc kubenswrapper[4970]: E0930 09:58:59.248721 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="940ba6ad-7b4b-4652-b428-abdb05ed735b" containerName="controller-manager" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.248747 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="940ba6ad-7b4b-4652-b428-abdb05ed735b" containerName="controller-manager" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.248894 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="940ba6ad-7b4b-4652-b428-abdb05ed735b" containerName="controller-manager" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.249469 4970 util.go:30] "No sandbox for pod can be found. 
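
The pod_startup_latency_tracker entries are plain clock arithmetic: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp, and because no image pull happened, both pull timestamps stay at the zero value 0001-01-01 00:00:00. The logged 2.858392577s for controller-manager-5c7ff8fb7c-skx9m can be reproduced directly from the timestamps in the entry; a standalone sketch using Go's default time.Time string layout:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the pod_startup_latency_tracker entry above.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-09-30 09:58:56 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-09-30 09:58:58.858392577 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 2.858392577s, the logged podStartSLOduration.
	fmt.Println(running.Sub(created))
}
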
Need to start a new one" pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.263463 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77b6b4b6f8-7djww"] Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.330622 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqc4t\" (UniqueName: \"kubernetes.io/projected/940ba6ad-7b4b-4652-b428-abdb05ed735b-kube-api-access-pqc4t\") pod \"940ba6ad-7b4b-4652-b428-abdb05ed735b\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.330754 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/940ba6ad-7b4b-4652-b428-abdb05ed735b-serving-cert\") pod \"940ba6ad-7b4b-4652-b428-abdb05ed735b\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.330814 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-client-ca\") pod \"940ba6ad-7b4b-4652-b428-abdb05ed735b\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.330912 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-proxy-ca-bundles\") pod \"940ba6ad-7b4b-4652-b428-abdb05ed735b\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.330938 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-config\") pod \"940ba6ad-7b4b-4652-b428-abdb05ed735b\" (UID: \"940ba6ad-7b4b-4652-b428-abdb05ed735b\") " Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.331196 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blfdr\" (UniqueName: \"kubernetes.io/projected/00352fce-306b-486b-9c7d-9aff9ce484da-kube-api-access-blfdr\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.331248 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00352fce-306b-486b-9c7d-9aff9ce484da-proxy-ca-bundles\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.331274 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00352fce-306b-486b-9c7d-9aff9ce484da-config\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.331311 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/00352fce-306b-486b-9c7d-9aff9ce484da-serving-cert\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.331334 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00352fce-306b-486b-9c7d-9aff9ce484da-client-ca\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.331860 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-client-ca" (OuterVolumeSpecName: "client-ca") pod "940ba6ad-7b4b-4652-b428-abdb05ed735b" (UID: "940ba6ad-7b4b-4652-b428-abdb05ed735b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.331927 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "940ba6ad-7b4b-4652-b428-abdb05ed735b" (UID: "940ba6ad-7b4b-4652-b428-abdb05ed735b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.332053 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-config" (OuterVolumeSpecName: "config") pod "940ba6ad-7b4b-4652-b428-abdb05ed735b" (UID: "940ba6ad-7b4b-4652-b428-abdb05ed735b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.337828 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/940ba6ad-7b4b-4652-b428-abdb05ed735b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "940ba6ad-7b4b-4652-b428-abdb05ed735b" (UID: "940ba6ad-7b4b-4652-b428-abdb05ed735b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.337947 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/940ba6ad-7b4b-4652-b428-abdb05ed735b-kube-api-access-pqc4t" (OuterVolumeSpecName: "kube-api-access-pqc4t") pod "940ba6ad-7b4b-4652-b428-abdb05ed735b" (UID: "940ba6ad-7b4b-4652-b428-abdb05ed735b"). InnerVolumeSpecName "kube-api-access-pqc4t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.432790 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00352fce-306b-486b-9c7d-9aff9ce484da-serving-cert\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.432877 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00352fce-306b-486b-9c7d-9aff9ce484da-client-ca\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.432958 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blfdr\" (UniqueName: \"kubernetes.io/projected/00352fce-306b-486b-9c7d-9aff9ce484da-kube-api-access-blfdr\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.433010 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00352fce-306b-486b-9c7d-9aff9ce484da-proxy-ca-bundles\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.433029 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00352fce-306b-486b-9c7d-9aff9ce484da-config\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.433107 4970 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.433125 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-config\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.433138 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqc4t\" (UniqueName: \"kubernetes.io/projected/940ba6ad-7b4b-4652-b428-abdb05ed735b-kube-api-access-pqc4t\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.433152 4970 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/940ba6ad-7b4b-4652-b428-abdb05ed735b-serving-cert\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.433163 4970 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/940ba6ad-7b4b-4652-b428-abdb05ed735b-client-ca\") on node \"crc\" DevicePath \"\"" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.434053 4970 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/00352fce-306b-486b-9c7d-9aff9ce484da-client-ca\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.434467 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/00352fce-306b-486b-9c7d-9aff9ce484da-config\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.434464 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/00352fce-306b-486b-9c7d-9aff9ce484da-proxy-ca-bundles\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.437681 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/00352fce-306b-486b-9c7d-9aff9ce484da-serving-cert\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.452475 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blfdr\" (UniqueName: \"kubernetes.io/projected/00352fce-306b-486b-9c7d-9aff9ce484da-kube-api-access-blfdr\") pod \"controller-manager-77b6b4b6f8-7djww\" (UID: \"00352fce-306b-486b-9c7d-9aff9ce484da\") " pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.571412 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.676808 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd9dda51-d5d7-46e1-886d-865955b5bd39" path="/var/lib/kubelet/pods/cd9dda51-d5d7-46e1-886d-865955b5bd39/volumes" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.857960 4970 generic.go:334] "Generic (PLEG): container finished" podID="940ba6ad-7b4b-4652-b428-abdb05ed735b" containerID="6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a" exitCode=0 Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.858239 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.858982 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" event={"ID":"940ba6ad-7b4b-4652-b428-abdb05ed735b","Type":"ContainerDied","Data":"6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a"} Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.859380 4970 scope.go:117] "RemoveContainer" containerID="6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.859048 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m" event={"ID":"940ba6ad-7b4b-4652-b428-abdb05ed735b","Type":"ContainerDied","Data":"59fdb9791c312dc218b7791810f7066ef44bf0b50c8630d020e919065c2e789f"} Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.887970 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m"] Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.889046 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c7ff8fb7c-skx9m"] Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.901494 4970 scope.go:117] "RemoveContainer" containerID="6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a" Sep 30 09:58:59 crc kubenswrapper[4970]: E0930 09:58:59.903192 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a\": container with ID starting with 6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a not found: ID does not exist" containerID="6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.903240 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a"} err="failed to get container status \"6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a\": rpc error: code = NotFound desc = could not find container \"6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a\": container with ID starting with 6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a not found: ID does not exist" Sep 30 09:58:59 crc kubenswrapper[4970]: I0930 09:58:59.991148 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77b6b4b6f8-7djww"] Sep 30 09:58:59 crc kubenswrapper[4970]: W0930 09:58:59.998451 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00352fce_306b_486b_9c7d_9aff9ce484da.slice/crio-d2bc431eebf797b6c06adbf77e3744389cd2ac51b995e39956c7751b45ace6fd WatchSource:0}: Error finding container d2bc431eebf797b6c06adbf77e3744389cd2ac51b995e39956c7751b45ace6fd: Status 404 returned error can't find the container with id d2bc431eebf797b6c06adbf77e3744389cd2ac51b995e39956c7751b45ace6fd Sep 30 09:59:00 crc kubenswrapper[4970]: I0930 09:59:00.866676 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" 
event={"ID":"00352fce-306b-486b-9c7d-9aff9ce484da","Type":"ContainerStarted","Data":"0c670e45039a44654fdd16a2a5940cac52302484ce0ab2700291fedd13f73eea"} Sep 30 09:59:00 crc kubenswrapper[4970]: I0930 09:59:00.867150 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" event={"ID":"00352fce-306b-486b-9c7d-9aff9ce484da","Type":"ContainerStarted","Data":"d2bc431eebf797b6c06adbf77e3744389cd2ac51b995e39956c7751b45ace6fd"} Sep 30 09:59:00 crc kubenswrapper[4970]: I0930 09:59:00.895289 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" podStartSLOduration=2.895264768 podStartE2EDuration="2.895264768s" podCreationTimestamp="2025-09-30 09:58:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:59:00.891166027 +0000 UTC m=+753.963016951" watchObservedRunningTime="2025-09-30 09:59:00.895264768 +0000 UTC m=+753.967115702" Sep 30 09:59:01 crc kubenswrapper[4970]: I0930 09:59:01.681290 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="940ba6ad-7b4b-4652-b428-abdb05ed735b" path="/var/lib/kubelet/pods/940ba6ad-7b4b-4652-b428-abdb05ed735b/volumes" Sep 30 09:59:01 crc kubenswrapper[4970]: I0930 09:59:01.875931 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:59:01 crc kubenswrapper[4970]: I0930 09:59:01.882105 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77b6b4b6f8-7djww" Sep 30 09:59:02 crc kubenswrapper[4970]: I0930 09:59:02.388950 4970 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Sep 30 09:59:03 crc kubenswrapper[4970]: I0930 09:59:03.406579 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5689865b7f-lzf5z" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.211402 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-z8ds7"] Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.214791 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.216371 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4"] Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.217148 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.217698 4970 util.go:30] "No sandbox for pod can be found. 
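
The event={...} payloads in the "SyncLoop (PLEG): event for pod" entries are small JSON objects: the pod UID, a lifecycle transition (ContainerStarted, ContainerDied), and the container or sandbox ID the transition refers to. A sketch of decoding one of the payloads above; the field names are inferred from the log text, not taken from kubelet source:

package main

import (
	"encoding/json"
	"fmt"
)

// plegEvent mirrors the event={...} payload embedded in the
// "SyncLoop (PLEG): event for pod" entries.
type plegEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // e.g. ContainerStarted, ContainerDied
	Data string `json:"Data"` // container or sandbox ID
}

func main() {
	// Payload copied from the ContainerDied entry for
	// controller-manager-5c7ff8fb7c-skx9m above.
	raw := `{"ID":"940ba6ad-7b4b-4652-b428-abdb05ed735b","Type":"ContainerDied","Data":"6ff4e8262b954c6e6aa5763dc6d513097fb0246d939d0bff0a43358b39697d4a"}`
	var ev plegEvent
	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("pod %s: %s (%s...)\n", ev.ID, ev.Type, ev.Data[:12])
}
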
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.217956 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-jfwzs" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.219938 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.222367 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.232952 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4"] Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.305961 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2c47cc30-570a-4a61-8025-f1f12067fa0b-frr-sockets\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.306430 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2c47cc30-570a-4a61-8025-f1f12067fa0b-frr-startup\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.306554 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c47cc30-570a-4a61-8025-f1f12067fa0b-metrics-certs\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.306664 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwvtw\" (UniqueName: \"kubernetes.io/projected/2c47cc30-570a-4a61-8025-f1f12067fa0b-kube-api-access-jwvtw\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.306826 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2c47cc30-570a-4a61-8025-f1f12067fa0b-frr-conf\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.306952 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2c47cc30-570a-4a61-8025-f1f12067fa0b-metrics\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.307119 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s22nz\" (UniqueName: \"kubernetes.io/projected/7c5f78f9-5ebd-434d-82c0-df6af4bc483b-kube-api-access-s22nz\") pod \"frr-k8s-webhook-server-5478bdb765-p7rc4\" (UID: \"7c5f78f9-5ebd-434d-82c0-df6af4bc483b\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 
09:59:04.307255 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c5f78f9-5ebd-434d-82c0-df6af4bc483b-cert\") pod \"frr-k8s-webhook-server-5478bdb765-p7rc4\" (UID: \"7c5f78f9-5ebd-434d-82c0-df6af4bc483b\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.307378 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2c47cc30-570a-4a61-8025-f1f12067fa0b-reloader\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.319821 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-f6gvx"] Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.341702 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-f6gvx" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.347433 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.347511 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.348544 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.349446 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5d688f5ffc-vzmvh"] Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.357443 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-j49rp" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.361338 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5d688f5ffc-vzmvh"] Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.361475 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5d688f5ffc-vzmvh" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.369782 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.408678 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c72cc58-2ee8-414b-a656-a2623e1664f0-metrics-certs\") pod \"controller-5d688f5ffc-vzmvh\" (UID: \"0c72cc58-2ee8-414b-a656-a2623e1664f0\") " pod="metallb-system/controller-5d688f5ffc-vzmvh" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.408767 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2c47cc30-570a-4a61-8025-f1f12067fa0b-frr-sockets\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.408813 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztss4\" (UniqueName: \"kubernetes.io/projected/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-kube-api-access-ztss4\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.408835 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c72cc58-2ee8-414b-a656-a2623e1664f0-cert\") pod \"controller-5d688f5ffc-vzmvh\" (UID: \"0c72cc58-2ee8-414b-a656-a2623e1664f0\") " pod="metallb-system/controller-5d688f5ffc-vzmvh" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.408858 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-memberlist\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.408876 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2c47cc30-570a-4a61-8025-f1f12067fa0b-frr-startup\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.408892 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c47cc30-570a-4a61-8025-f1f12067fa0b-metrics-certs\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.408909 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwvtw\" (UniqueName: \"kubernetes.io/projected/2c47cc30-570a-4a61-8025-f1f12067fa0b-kube-api-access-jwvtw\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.408930 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-metallb-excludel2\") pod 
\"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.408954 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2c47cc30-570a-4a61-8025-f1f12067fa0b-frr-conf\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.408978 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2c47cc30-570a-4a61-8025-f1f12067fa0b-metrics\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.409025 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dxgq\" (UniqueName: \"kubernetes.io/projected/0c72cc58-2ee8-414b-a656-a2623e1664f0-kube-api-access-2dxgq\") pod \"controller-5d688f5ffc-vzmvh\" (UID: \"0c72cc58-2ee8-414b-a656-a2623e1664f0\") " pod="metallb-system/controller-5d688f5ffc-vzmvh" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.409052 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s22nz\" (UniqueName: \"kubernetes.io/projected/7c5f78f9-5ebd-434d-82c0-df6af4bc483b-kube-api-access-s22nz\") pod \"frr-k8s-webhook-server-5478bdb765-p7rc4\" (UID: \"7c5f78f9-5ebd-434d-82c0-df6af4bc483b\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.409077 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c5f78f9-5ebd-434d-82c0-df6af4bc483b-cert\") pod \"frr-k8s-webhook-server-5478bdb765-p7rc4\" (UID: \"7c5f78f9-5ebd-434d-82c0-df6af4bc483b\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.409093 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-metrics-certs\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.409109 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2c47cc30-570a-4a61-8025-f1f12067fa0b-reloader\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.409598 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2c47cc30-570a-4a61-8025-f1f12067fa0b-reloader\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.409810 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2c47cc30-570a-4a61-8025-f1f12067fa0b-frr-sockets\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.410680 4970 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2c47cc30-570a-4a61-8025-f1f12067fa0b-frr-startup\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: E0930 09:59:04.410799 4970 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Sep 30 09:59:04 crc kubenswrapper[4970]: E0930 09:59:04.410855 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c47cc30-570a-4a61-8025-f1f12067fa0b-metrics-certs podName:2c47cc30-570a-4a61-8025-f1f12067fa0b nodeName:}" failed. No retries permitted until 2025-09-30 09:59:04.910839377 +0000 UTC m=+757.982690311 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c47cc30-570a-4a61-8025-f1f12067fa0b-metrics-certs") pod "frr-k8s-z8ds7" (UID: "2c47cc30-570a-4a61-8025-f1f12067fa0b") : secret "frr-k8s-certs-secret" not found Sep 30 09:59:04 crc kubenswrapper[4970]: E0930 09:59:04.411181 4970 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Sep 30 09:59:04 crc kubenswrapper[4970]: E0930 09:59:04.411305 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c5f78f9-5ebd-434d-82c0-df6af4bc483b-cert podName:7c5f78f9-5ebd-434d-82c0-df6af4bc483b nodeName:}" failed. No retries permitted until 2025-09-30 09:59:04.911275628 +0000 UTC m=+757.983126742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c5f78f9-5ebd-434d-82c0-df6af4bc483b-cert") pod "frr-k8s-webhook-server-5478bdb765-p7rc4" (UID: "7c5f78f9-5ebd-434d-82c0-df6af4bc483b") : secret "frr-k8s-webhook-server-cert" not found Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.411461 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2c47cc30-570a-4a61-8025-f1f12067fa0b-frr-conf\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.411648 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2c47cc30-570a-4a61-8025-f1f12067fa0b-metrics\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.443206 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s22nz\" (UniqueName: \"kubernetes.io/projected/7c5f78f9-5ebd-434d-82c0-df6af4bc483b-kube-api-access-s22nz\") pod \"frr-k8s-webhook-server-5478bdb765-p7rc4\" (UID: \"7c5f78f9-5ebd-434d-82c0-df6af4bc483b\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.452202 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwvtw\" (UniqueName: \"kubernetes.io/projected/2c47cc30-570a-4a61-8025-f1f12067fa0b-kube-api-access-jwvtw\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.510948 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-metallb-excludel2\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.511542 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dxgq\" (UniqueName: \"kubernetes.io/projected/0c72cc58-2ee8-414b-a656-a2623e1664f0-kube-api-access-2dxgq\") pod \"controller-5d688f5ffc-vzmvh\" (UID: \"0c72cc58-2ee8-414b-a656-a2623e1664f0\") " pod="metallb-system/controller-5d688f5ffc-vzmvh" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.511615 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-metrics-certs\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.511645 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c72cc58-2ee8-414b-a656-a2623e1664f0-metrics-certs\") pod \"controller-5d688f5ffc-vzmvh\" (UID: \"0c72cc58-2ee8-414b-a656-a2623e1664f0\") " pod="metallb-system/controller-5d688f5ffc-vzmvh" Sep 30 09:59:04 crc kubenswrapper[4970]: E0930 09:59:04.511890 4970 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.511759 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztss4\" (UniqueName: \"kubernetes.io/projected/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-kube-api-access-ztss4\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx" Sep 30 09:59:04 crc kubenswrapper[4970]: E0930 09:59:04.511919 4970 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.512235 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c72cc58-2ee8-414b-a656-a2623e1664f0-cert\") pod \"controller-5d688f5ffc-vzmvh\" (UID: \"0c72cc58-2ee8-414b-a656-a2623e1664f0\") " pod="metallb-system/controller-5d688f5ffc-vzmvh" Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.512243 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-metallb-excludel2\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx" Sep 30 09:59:04 crc kubenswrapper[4970]: E0930 09:59:04.512242 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-metrics-certs podName:fef9dca8-f780-4d0b-b7b8-68cd4f13de1a nodeName:}" failed. No retries permitted until 2025-09-30 09:59:05.012210575 +0000 UTC m=+758.084061699 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-metrics-certs") pod "speaker-f6gvx" (UID: "fef9dca8-f780-4d0b-b7b8-68cd4f13de1a") : secret "speaker-certs-secret" not found Sep 30 09:59:04 crc kubenswrapper[4970]: E0930 09:59:04.512480 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0c72cc58-2ee8-414b-a656-a2623e1664f0-metrics-certs podName:0c72cc58-2ee8-414b-a656-a2623e1664f0 nodeName:}" failed. No retries permitted until 2025-09-30 09:59:05.012457862 +0000 UTC m=+758.084308796 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/0c72cc58-2ee8-414b-a656-a2623e1664f0-metrics-certs") pod "controller-5d688f5ffc-vzmvh" (UID: "0c72cc58-2ee8-414b-a656-a2623e1664f0") : secret "controller-certs-secret" not found Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.512506 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-memberlist\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx" Sep 30 09:59:04 crc kubenswrapper[4970]: E0930 09:59:04.512610 4970 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Sep 30 09:59:04 crc kubenswrapper[4970]: E0930 09:59:04.512665 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-memberlist podName:fef9dca8-f780-4d0b-b7b8-68cd4f13de1a nodeName:}" failed. No retries permitted until 2025-09-30 09:59:05.012656207 +0000 UTC m=+758.084507141 (durationBeforeRetry 500ms). 
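
The secret-not-found failures are retried with a doubling delay: nestedpendingoperations schedules the first retry 500ms out, and the next attempt for the still-missing memberlist secret further down is pushed out by 1s. A sketch of that progression; the ceiling is an assumption for illustration, not a value observed in this log:

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the wait after each consecutive failure,
// matching the 500ms -> 1s progression visible in the
// nestedpendingoperations entries. The ceiling is assumed for the
// sketch and does not reproduce the kubelet's actual policy.
func nextBackoff(prev time.Duration) time.Duration {
	const initial = 500 * time.Millisecond
	const ceiling = 2 * time.Minute // assumed, not observed here
	switch {
	case prev <= 0:
		return initial
	case prev*2 >= ceiling:
		return ceiling
	default:
		return prev * 2
	}
}

func main() {
	var d time.Duration
	for i := 1; i <= 4; i++ {
		d = nextBackoff(d)
		fmt.Printf("failure %d: retry in %s\n", i, d)
	}
}
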
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-memberlist") pod "speaker-f6gvx" (UID: "fef9dca8-f780-4d0b-b7b8-68cd4f13de1a") : secret "metallb-memberlist" not found
Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.519132 4970 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.526640 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0c72cc58-2ee8-414b-a656-a2623e1664f0-cert\") pod \"controller-5d688f5ffc-vzmvh\" (UID: \"0c72cc58-2ee8-414b-a656-a2623e1664f0\") " pod="metallb-system/controller-5d688f5ffc-vzmvh"
Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.544684 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dxgq\" (UniqueName: \"kubernetes.io/projected/0c72cc58-2ee8-414b-a656-a2623e1664f0-kube-api-access-2dxgq\") pod \"controller-5d688f5ffc-vzmvh\" (UID: \"0c72cc58-2ee8-414b-a656-a2623e1664f0\") " pod="metallb-system/controller-5d688f5ffc-vzmvh"
Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.544720 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztss4\" (UniqueName: \"kubernetes.io/projected/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-kube-api-access-ztss4\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx"
Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.918819 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c47cc30-570a-4a61-8025-f1f12067fa0b-metrics-certs\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7"
Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.918901 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c5f78f9-5ebd-434d-82c0-df6af4bc483b-cert\") pod \"frr-k8s-webhook-server-5478bdb765-p7rc4\" (UID: \"7c5f78f9-5ebd-434d-82c0-df6af4bc483b\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4"
Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.922632 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c47cc30-570a-4a61-8025-f1f12067fa0b-metrics-certs\") pod \"frr-k8s-z8ds7\" (UID: \"2c47cc30-570a-4a61-8025-f1f12067fa0b\") " pod="metallb-system/frr-k8s-z8ds7"
Sep 30 09:59:04 crc kubenswrapper[4970]: I0930 09:59:04.923692 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c5f78f9-5ebd-434d-82c0-df6af4bc483b-cert\") pod \"frr-k8s-webhook-server-5478bdb765-p7rc4\" (UID: \"7c5f78f9-5ebd-434d-82c0-df6af4bc483b\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4"
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.020926 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-metrics-certs\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx"
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.021030 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c72cc58-2ee8-414b-a656-a2623e1664f0-metrics-certs\") pod \"controller-5d688f5ffc-vzmvh\" (UID: \"0c72cc58-2ee8-414b-a656-a2623e1664f0\") " pod="metallb-system/controller-5d688f5ffc-vzmvh"
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.021117 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-memberlist\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx"
Sep 30 09:59:05 crc kubenswrapper[4970]: E0930 09:59:05.021286 4970 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Sep 30 09:59:05 crc kubenswrapper[4970]: E0930 09:59:05.021366 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-memberlist podName:fef9dca8-f780-4d0b-b7b8-68cd4f13de1a nodeName:}" failed. No retries permitted until 2025-09-30 09:59:06.021343386 +0000 UTC m=+759.093194320 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-memberlist") pod "speaker-f6gvx" (UID: "fef9dca8-f780-4d0b-b7b8-68cd4f13de1a") : secret "metallb-memberlist" not found
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.025052 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-metrics-certs\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx"
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.026043 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0c72cc58-2ee8-414b-a656-a2623e1664f0-metrics-certs\") pod \"controller-5d688f5ffc-vzmvh\" (UID: \"0c72cc58-2ee8-414b-a656-a2623e1664f0\") " pod="metallb-system/controller-5d688f5ffc-vzmvh"
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.136815 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-z8ds7"
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.144044 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4"
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.291746 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5d688f5ffc-vzmvh"
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.614731 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4"]
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.768501 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5d688f5ffc-vzmvh"]
Sep 30 09:59:05 crc kubenswrapper[4970]: W0930 09:59:05.777558 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c72cc58_2ee8_414b_a656_a2623e1664f0.slice/crio-58f3a05a5fec7686b9ad7e174ff2f4b1caedbd0eab2eb336405cf15cc9aa8423 WatchSource:0}: Error finding container 58f3a05a5fec7686b9ad7e174ff2f4b1caedbd0eab2eb336405cf15cc9aa8423: Status 404 returned error can't find the container with id 58f3a05a5fec7686b9ad7e174ff2f4b1caedbd0eab2eb336405cf15cc9aa8423
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.903574 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z8ds7" event={"ID":"2c47cc30-570a-4a61-8025-f1f12067fa0b","Type":"ContainerStarted","Data":"6d6492c7169c9b2e14b1afbf8803b7d5af3abf18e34505bdb185c9f0b37b0462"}
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.905530 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-vzmvh" event={"ID":"0c72cc58-2ee8-414b-a656-a2623e1664f0","Type":"ContainerStarted","Data":"58f3a05a5fec7686b9ad7e174ff2f4b1caedbd0eab2eb336405cf15cc9aa8423"}
Sep 30 09:59:05 crc kubenswrapper[4970]: I0930 09:59:05.907688 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4" event={"ID":"7c5f78f9-5ebd-434d-82c0-df6af4bc483b","Type":"ContainerStarted","Data":"c40eb9f444a9b4999a65c462e7e47f23bd7ae53f418e5066b35f2b82242d08cf"}
Sep 30 09:59:06 crc kubenswrapper[4970]: I0930 09:59:06.039696 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-memberlist\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx"
Sep 30 09:59:06 crc kubenswrapper[4970]: I0930 09:59:06.048422 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/fef9dca8-f780-4d0b-b7b8-68cd4f13de1a-memberlist\") pod \"speaker-f6gvx\" (UID: \"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a\") " pod="metallb-system/speaker-f6gvx"
Sep 30 09:59:06 crc kubenswrapper[4970]: I0930 09:59:06.181442 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-f6gvx"
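The memberlist failure above resolves itself: the kubelet blocks retries for the durationBeforeRetry window (1s here), and on the 09:59:06 retry the secret exists and the mount succeeds. A minimal sketch of that retry-with-backoff pattern, in Python for illustration only; the initial delay, factor, and cap are assumptions chosen to be consistent with the 1s seen in the log, not kubelet's exact constants (those live in nestedpendingoperations.go):

```python
import time

def mount_with_backoff(setup, initial=0.5, factor=2.0, cap=122.0, attempts=8):
    """Retry a mount-style operation with exponential backoff.

    Illustrative only: initial/factor/cap are assumptions, not the values
    kubelet actually uses.
    """
    delay = initial
    for attempt in range(attempts):
        try:
            return setup()
        except Exception as err:
            # Mirrors "failed. No retries permitted until <now + delay>".
            print(f"attempt {attempt} failed: {err}; retrying in {delay}s")
            time.sleep(delay)
            delay = min(delay * factor, cap)
    raise TimeoutError("mount never succeeded")
```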
Need to start a new one" pod="metallb-system/speaker-f6gvx" Sep 30 09:59:06 crc kubenswrapper[4970]: I0930 09:59:06.929831 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-vzmvh" event={"ID":"0c72cc58-2ee8-414b-a656-a2623e1664f0","Type":"ContainerStarted","Data":"10711e9403c6a202a733ac5a81df1d57e6e89925c0d14beab4199e8715e0c999"} Sep 30 09:59:06 crc kubenswrapper[4970]: I0930 09:59:06.930279 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-vzmvh" event={"ID":"0c72cc58-2ee8-414b-a656-a2623e1664f0","Type":"ContainerStarted","Data":"e6e5f04861a1250ce62c35473b9a97ca01eaa0abd29a9817e3ed589f5711bc2c"} Sep 30 09:59:06 crc kubenswrapper[4970]: I0930 09:59:06.930306 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5d688f5ffc-vzmvh" Sep 30 09:59:06 crc kubenswrapper[4970]: I0930 09:59:06.936300 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-f6gvx" event={"ID":"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a","Type":"ContainerStarted","Data":"63f6215348678b39661974e27e5f68981348de3f57a5566e5b475ef3661a2c80"} Sep 30 09:59:06 crc kubenswrapper[4970]: I0930 09:59:06.936360 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-f6gvx" event={"ID":"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a","Type":"ContainerStarted","Data":"3eb89d4548b88cf80f3e63725c291991531f275bcf62efa76126ac66ff284e0c"} Sep 30 09:59:06 crc kubenswrapper[4970]: I0930 09:59:06.965286 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5d688f5ffc-vzmvh" podStartSLOduration=2.965260065 podStartE2EDuration="2.965260065s" podCreationTimestamp="2025-09-30 09:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:59:06.963811966 +0000 UTC m=+760.035662900" watchObservedRunningTime="2025-09-30 09:59:06.965260065 +0000 UTC m=+760.037110999" Sep 30 09:59:07 crc kubenswrapper[4970]: I0930 09:59:07.950942 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-f6gvx" event={"ID":"fef9dca8-f780-4d0b-b7b8-68cd4f13de1a","Type":"ContainerStarted","Data":"5265ffbdd06d2e68c0f0807ca3528044f4bed073852fe4cc0007f0398d29e628"} Sep 30 09:59:07 crc kubenswrapper[4970]: I0930 09:59:07.952150 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-f6gvx" Sep 30 09:59:07 crc kubenswrapper[4970]: I0930 09:59:07.992161 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-f6gvx" podStartSLOduration=3.992133129 podStartE2EDuration="3.992133129s" podCreationTimestamp="2025-09-30 09:59:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 09:59:07.988802679 +0000 UTC m=+761.060653613" watchObservedRunningTime="2025-09-30 09:59:07.992133129 +0000 UTC m=+761.063984073" Sep 30 09:59:14 crc kubenswrapper[4970]: I0930 09:59:14.996840 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4" event={"ID":"7c5f78f9-5ebd-434d-82c0-df6af4bc483b","Type":"ContainerStarted","Data":"4d1d46b290624dffa7641e10e599f7c8aa6fd966b46f52522324a81019ecae09"} Sep 30 09:59:14 crc kubenswrapper[4970]: I0930 09:59:14.997512 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4" Sep 30 09:59:14 crc kubenswrapper[4970]: I0930 09:59:14.999537 4970 generic.go:334] "Generic (PLEG): container finished" podID="2c47cc30-570a-4a61-8025-f1f12067fa0b" containerID="49a2fd382e92b5ab35a11a817fd8d6fbf93fc63107065002c27eaf7acfe2fe94" exitCode=0 Sep 30 09:59:14 crc kubenswrapper[4970]: I0930 09:59:14.999601 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z8ds7" event={"ID":"2c47cc30-570a-4a61-8025-f1f12067fa0b","Type":"ContainerDied","Data":"49a2fd382e92b5ab35a11a817fd8d6fbf93fc63107065002c27eaf7acfe2fe94"} Sep 30 09:59:15 crc kubenswrapper[4970]: I0930 09:59:15.029515 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4" podStartSLOduration=2.640778133 podStartE2EDuration="11.029485866s" podCreationTimestamp="2025-09-30 09:59:04 +0000 UTC" firstStartedPulling="2025-09-30 09:59:05.631202938 +0000 UTC m=+758.703053872" lastFinishedPulling="2025-09-30 09:59:14.019910671 +0000 UTC m=+767.091761605" observedRunningTime="2025-09-30 09:59:15.018282002 +0000 UTC m=+768.090132966" watchObservedRunningTime="2025-09-30 09:59:15.029485866 +0000 UTC m=+768.101336810" Sep 30 09:59:15 crc kubenswrapper[4970]: I0930 09:59:15.298075 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5d688f5ffc-vzmvh" Sep 30 09:59:16 crc kubenswrapper[4970]: I0930 09:59:16.007790 4970 generic.go:334] "Generic (PLEG): container finished" podID="2c47cc30-570a-4a61-8025-f1f12067fa0b" containerID="c5552d8088740700f3b737d4fe39b7de10cecb42eab429e4fd8cf0da20deab34" exitCode=0 Sep 30 09:59:16 crc kubenswrapper[4970]: I0930 09:59:16.007850 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z8ds7" event={"ID":"2c47cc30-570a-4a61-8025-f1f12067fa0b","Type":"ContainerDied","Data":"c5552d8088740700f3b737d4fe39b7de10cecb42eab429e4fd8cf0da20deab34"} Sep 30 09:59:16 crc kubenswrapper[4970]: I0930 09:59:16.185295 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-f6gvx" Sep 30 09:59:17 crc kubenswrapper[4970]: I0930 09:59:17.016816 4970 generic.go:334] "Generic (PLEG): container finished" podID="2c47cc30-570a-4a61-8025-f1f12067fa0b" containerID="45cea92f413aef2b4ab4a3a89dea2787fd07bcd8ab170d30691ac501cd00925b" exitCode=0 Sep 30 09:59:17 crc kubenswrapper[4970]: I0930 09:59:17.016876 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z8ds7" event={"ID":"2c47cc30-570a-4a61-8025-f1f12067fa0b","Type":"ContainerDied","Data":"45cea92f413aef2b4ab4a3a89dea2787fd07bcd8ab170d30691ac501cd00925b"} Sep 30 09:59:18 crc kubenswrapper[4970]: I0930 09:59:18.033263 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z8ds7" event={"ID":"2c47cc30-570a-4a61-8025-f1f12067fa0b","Type":"ContainerStarted","Data":"eb5036ca3facbf0e5b954acf454a93db4cdafc9046d922fb60782fd701035879"} Sep 30 09:59:18 crc kubenswrapper[4970]: I0930 09:59:18.033932 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z8ds7" event={"ID":"2c47cc30-570a-4a61-8025-f1f12067fa0b","Type":"ContainerStarted","Data":"f1750a155d153333a3a3c393961cb186441400d06751b887781df4f7f23a9b72"} Sep 30 09:59:18 crc kubenswrapper[4970]: I0930 09:59:18.033947 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z8ds7" 
event={"ID":"2c47cc30-570a-4a61-8025-f1f12067fa0b","Type":"ContainerStarted","Data":"d7e3d0f16efc16438feb1f4165360cd4d5c73d4daacfaf81e42aa90c48392673"} Sep 30 09:59:18 crc kubenswrapper[4970]: I0930 09:59:18.033960 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z8ds7" event={"ID":"2c47cc30-570a-4a61-8025-f1f12067fa0b","Type":"ContainerStarted","Data":"6c1abe04840d41870a876ec59954ab63811c67bef256e724de91f383154d59b8"} Sep 30 09:59:18 crc kubenswrapper[4970]: I0930 09:59:18.033976 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z8ds7" event={"ID":"2c47cc30-570a-4a61-8025-f1f12067fa0b","Type":"ContainerStarted","Data":"b86be0f01f9cd7e11dffb6a50cc99a868a8ce9d5937efa5d86b58b0813053a52"} Sep 30 09:59:19 crc kubenswrapper[4970]: I0930 09:59:19.046164 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-z8ds7" event={"ID":"2c47cc30-570a-4a61-8025-f1f12067fa0b","Type":"ContainerStarted","Data":"f73ca1d0af4bd5033eb12252f1fc6bf88ad6ac61b91d81e2cbcc9d1b64987771"} Sep 30 09:59:19 crc kubenswrapper[4970]: I0930 09:59:19.046411 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:19 crc kubenswrapper[4970]: I0930 09:59:19.090617 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-z8ds7" podStartSLOduration=6.433725394 podStartE2EDuration="15.090597391s" podCreationTimestamp="2025-09-30 09:59:04 +0000 UTC" firstStartedPulling="2025-09-30 09:59:05.335366063 +0000 UTC m=+758.407216997" lastFinishedPulling="2025-09-30 09:59:13.99223806 +0000 UTC m=+767.064088994" observedRunningTime="2025-09-30 09:59:19.086953192 +0000 UTC m=+772.158804146" watchObservedRunningTime="2025-09-30 09:59:19.090597391 +0000 UTC m=+772.162448325" Sep 30 09:59:20 crc kubenswrapper[4970]: I0930 09:59:20.137979 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:20 crc kubenswrapper[4970]: I0930 09:59:20.183823 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-z8ds7" Sep 30 09:59:22 crc kubenswrapper[4970]: I0930 09:59:22.560811 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-knmz5"] Sep 30 09:59:22 crc kubenswrapper[4970]: I0930 09:59:22.562425 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-knmz5" Sep 30 09:59:22 crc kubenswrapper[4970]: I0930 09:59:22.566448 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Sep 30 09:59:22 crc kubenswrapper[4970]: I0930 09:59:22.566526 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-k977z" Sep 30 09:59:22 crc kubenswrapper[4970]: I0930 09:59:22.566458 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Sep 30 09:59:22 crc kubenswrapper[4970]: I0930 09:59:22.578290 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-knmz5"] Sep 30 09:59:22 crc kubenswrapper[4970]: I0930 09:59:22.631725 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw7gt\" (UniqueName: \"kubernetes.io/projected/7920bd73-b686-43c9-a9f1-01c6f0ea8eca-kube-api-access-rw7gt\") pod \"openstack-operator-index-knmz5\" (UID: \"7920bd73-b686-43c9-a9f1-01c6f0ea8eca\") " pod="openstack-operators/openstack-operator-index-knmz5" Sep 30 09:59:22 crc kubenswrapper[4970]: I0930 09:59:22.733489 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw7gt\" (UniqueName: \"kubernetes.io/projected/7920bd73-b686-43c9-a9f1-01c6f0ea8eca-kube-api-access-rw7gt\") pod \"openstack-operator-index-knmz5\" (UID: \"7920bd73-b686-43c9-a9f1-01c6f0ea8eca\") " pod="openstack-operators/openstack-operator-index-knmz5" Sep 30 09:59:22 crc kubenswrapper[4970]: I0930 09:59:22.759437 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw7gt\" (UniqueName: \"kubernetes.io/projected/7920bd73-b686-43c9-a9f1-01c6f0ea8eca-kube-api-access-rw7gt\") pod \"openstack-operator-index-knmz5\" (UID: \"7920bd73-b686-43c9-a9f1-01c6f0ea8eca\") " pod="openstack-operators/openstack-operator-index-knmz5" Sep 30 09:59:22 crc kubenswrapper[4970]: I0930 09:59:22.893793 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-knmz5" Sep 30 09:59:23 crc kubenswrapper[4970]: I0930 09:59:23.368097 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-knmz5"] Sep 30 09:59:24 crc kubenswrapper[4970]: I0930 09:59:24.079464 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-knmz5" event={"ID":"7920bd73-b686-43c9-a9f1-01c6f0ea8eca","Type":"ContainerStarted","Data":"c4eb1b898a306264ba1639da05b7e8cba23ab3a18ca95563c5ae2bc2e87d6d79"} Sep 30 09:59:25 crc kubenswrapper[4970]: I0930 09:59:25.149073 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-p7rc4" Sep 30 09:59:27 crc kubenswrapper[4970]: I0930 09:59:27.103897 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-knmz5" event={"ID":"7920bd73-b686-43c9-a9f1-01c6f0ea8eca","Type":"ContainerStarted","Data":"cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca"} Sep 30 09:59:27 crc kubenswrapper[4970]: I0930 09:59:27.756255 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-knmz5" podStartSLOduration=3.154195881 podStartE2EDuration="5.756233672s" podCreationTimestamp="2025-09-30 09:59:22 +0000 UTC" firstStartedPulling="2025-09-30 09:59:23.390621178 +0000 UTC m=+776.462472112" lastFinishedPulling="2025-09-30 09:59:25.992658969 +0000 UTC m=+779.064509903" observedRunningTime="2025-09-30 09:59:27.137654778 +0000 UTC m=+780.209505712" watchObservedRunningTime="2025-09-30 09:59:27.756233672 +0000 UTC m=+780.828084606" Sep 30 09:59:27 crc kubenswrapper[4970]: I0930 09:59:27.758597 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-knmz5"] Sep 30 09:59:28 crc kubenswrapper[4970]: I0930 09:59:28.370681 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-xv9kb"] Sep 30 09:59:28 crc kubenswrapper[4970]: I0930 09:59:28.372316 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-xv9kb" Sep 30 09:59:28 crc kubenswrapper[4970]: I0930 09:59:28.378646 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xv9kb"] Sep 30 09:59:28 crc kubenswrapper[4970]: I0930 09:59:28.424704 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqrd4\" (UniqueName: \"kubernetes.io/projected/8bc86bb1-b7dd-4e60-9d3a-820f8b7a99ef-kube-api-access-bqrd4\") pod \"openstack-operator-index-xv9kb\" (UID: \"8bc86bb1-b7dd-4e60-9d3a-820f8b7a99ef\") " pod="openstack-operators/openstack-operator-index-xv9kb" Sep 30 09:59:28 crc kubenswrapper[4970]: I0930 09:59:28.526518 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqrd4\" (UniqueName: \"kubernetes.io/projected/8bc86bb1-b7dd-4e60-9d3a-820f8b7a99ef-kube-api-access-bqrd4\") pod \"openstack-operator-index-xv9kb\" (UID: \"8bc86bb1-b7dd-4e60-9d3a-820f8b7a99ef\") " pod="openstack-operators/openstack-operator-index-xv9kb" Sep 30 09:59:28 crc kubenswrapper[4970]: I0930 09:59:28.547197 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqrd4\" (UniqueName: \"kubernetes.io/projected/8bc86bb1-b7dd-4e60-9d3a-820f8b7a99ef-kube-api-access-bqrd4\") pod \"openstack-operator-index-xv9kb\" (UID: \"8bc86bb1-b7dd-4e60-9d3a-820f8b7a99ef\") " pod="openstack-operators/openstack-operator-index-xv9kb" Sep 30 09:59:28 crc kubenswrapper[4970]: I0930 09:59:28.693240 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-xv9kb" Sep 30 09:59:29 crc kubenswrapper[4970]: I0930 09:59:29.116176 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-knmz5" podUID="7920bd73-b686-43c9-a9f1-01c6f0ea8eca" containerName="registry-server" containerID="cri-o://cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca" gracePeriod=2 Sep 30 09:59:29 crc kubenswrapper[4970]: I0930 09:59:29.143762 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-xv9kb"] Sep 30 09:59:29 crc kubenswrapper[4970]: I0930 09:59:29.642512 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-knmz5" Sep 30 09:59:29 crc kubenswrapper[4970]: I0930 09:59:29.686565 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw7gt\" (UniqueName: \"kubernetes.io/projected/7920bd73-b686-43c9-a9f1-01c6f0ea8eca-kube-api-access-rw7gt\") pod \"7920bd73-b686-43c9-a9f1-01c6f0ea8eca\" (UID: \"7920bd73-b686-43c9-a9f1-01c6f0ea8eca\") " Sep 30 09:59:29 crc kubenswrapper[4970]: I0930 09:59:29.693300 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7920bd73-b686-43c9-a9f1-01c6f0ea8eca-kube-api-access-rw7gt" (OuterVolumeSpecName: "kube-api-access-rw7gt") pod "7920bd73-b686-43c9-a9f1-01c6f0ea8eca" (UID: "7920bd73-b686-43c9-a9f1-01c6f0ea8eca"). InnerVolumeSpecName "kube-api-access-rw7gt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:59:29 crc kubenswrapper[4970]: I0930 09:59:29.788185 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rw7gt\" (UniqueName: \"kubernetes.io/projected/7920bd73-b686-43c9-a9f1-01c6f0ea8eca-kube-api-access-rw7gt\") on node \"crc\" DevicePath \"\"" Sep 30 09:59:30 crc kubenswrapper[4970]: I0930 09:59:30.122714 4970 generic.go:334] "Generic (PLEG): container finished" podID="7920bd73-b686-43c9-a9f1-01c6f0ea8eca" containerID="cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca" exitCode=0 Sep 30 09:59:30 crc kubenswrapper[4970]: I0930 09:59:30.122792 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-knmz5" event={"ID":"7920bd73-b686-43c9-a9f1-01c6f0ea8eca","Type":"ContainerDied","Data":"cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca"} Sep 30 09:59:30 crc kubenswrapper[4970]: I0930 09:59:30.122820 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-knmz5" Sep 30 09:59:30 crc kubenswrapper[4970]: I0930 09:59:30.123421 4970 scope.go:117] "RemoveContainer" containerID="cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca" Sep 30 09:59:30 crc kubenswrapper[4970]: I0930 09:59:30.123397 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-knmz5" event={"ID":"7920bd73-b686-43c9-a9f1-01c6f0ea8eca","Type":"ContainerDied","Data":"c4eb1b898a306264ba1639da05b7e8cba23ab3a18ca95563c5ae2bc2e87d6d79"} Sep 30 09:59:30 crc kubenswrapper[4970]: I0930 09:59:30.124955 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xv9kb" event={"ID":"8bc86bb1-b7dd-4e60-9d3a-820f8b7a99ef","Type":"ContainerStarted","Data":"1c0911805ae514f202f8114a8d837311866760e432b0289cb5e90df2211c706b"} Sep 30 09:59:30 crc kubenswrapper[4970]: I0930 09:59:30.125018 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-xv9kb" event={"ID":"8bc86bb1-b7dd-4e60-9d3a-820f8b7a99ef","Type":"ContainerStarted","Data":"2f7b579c13e8285a7ff5766e19e9cefe2a41ea2594e2b6ef2dc40c99d6cfec88"} Sep 30 09:59:30 crc kubenswrapper[4970]: I0930 09:59:30.144091 4970 scope.go:117] "RemoveContainer" containerID="cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca" Sep 30 09:59:30 crc kubenswrapper[4970]: E0930 09:59:30.144559 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca\": container with ID starting with cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca not found: ID does not exist" containerID="cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca" Sep 30 09:59:30 crc kubenswrapper[4970]: I0930 09:59:30.144619 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca"} err="failed to get container status \"cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca\": rpc error: code = NotFound desc = could not find container \"cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca\": container with ID starting with cd1bdcc8c687ad23b57ccdcd28587fae5ed0dada7930c5a1959200458d891fca not found: ID does not exist" Sep 30 09:59:30 crc kubenswrapper[4970]: 
Sep 30 09:59:30 crc kubenswrapper[4970]: I0930 09:59:30.152958 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-xv9kb" podStartSLOduration=2.102054773 podStartE2EDuration="2.152937312s" podCreationTimestamp="2025-09-30 09:59:28 +0000 UTC" firstStartedPulling="2025-09-30 09:59:29.16425757 +0000 UTC m=+782.236108524" lastFinishedPulling="2025-09-30 09:59:29.215140129 +0000 UTC m=+782.286991063" observedRunningTime="2025-09-30 09:59:30.151495883 +0000 UTC m=+783.223346827" watchObservedRunningTime="2025-09-30 09:59:30.152937312 +0000 UTC m=+783.224788246"
Sep 30 09:59:30 crc kubenswrapper[4970]: I0930 09:59:30.168170 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-knmz5"]
Sep 30 09:59:30 crc kubenswrapper[4970]: I0930 09:59:30.172805 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-knmz5"]
Sep 30 09:59:31 crc kubenswrapper[4970]: I0930 09:59:31.678552 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7920bd73-b686-43c9-a9f1-01c6f0ea8eca" path="/var/lib/kubelet/pods/7920bd73-b686-43c9-a9f1-01c6f0ea8eca/volumes"
Sep 30 09:59:34 crc kubenswrapper[4970]: I0930 09:59:34.822334 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 09:59:34 crc kubenswrapper[4970]: I0930 09:59:34.822921 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 09:59:35 crc kubenswrapper[4970]: I0930 09:59:35.142052 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-z8ds7"
Sep 30 09:59:38 crc kubenswrapper[4970]: I0930 09:59:38.693402 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-xv9kb"
Sep 30 09:59:38 crc kubenswrapper[4970]: I0930 09:59:38.693877 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-xv9kb"
Sep 30 09:59:38 crc kubenswrapper[4970]: I0930 09:59:38.726255 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-xv9kb"
Sep 30 09:59:39 crc kubenswrapper[4970]: I0930 09:59:39.217842 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-xv9kb"
Sep 30 09:59:40 crc kubenswrapper[4970]: I0930 09:59:40.826371 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"]
Sep 30 09:59:40 crc kubenswrapper[4970]: E0930 09:59:40.835576 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7920bd73-b686-43c9-a9f1-01c6f0ea8eca" containerName="registry-server"
Sep 30 09:59:40 crc kubenswrapper[4970]: I0930 09:59:40.835627 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7920bd73-b686-43c9-a9f1-01c6f0ea8eca" containerName="registry-server"
Sep 30 09:59:40 crc kubenswrapper[4970]: I0930 09:59:40.835827 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7920bd73-b686-43c9-a9f1-01c6f0ea8eca" containerName="registry-server"
Sep 30 09:59:40 crc kubenswrapper[4970]: I0930 09:59:40.837281 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"
Sep 30 09:59:40 crc kubenswrapper[4970]: I0930 09:59:40.846215 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-r6qnk"
Sep 30 09:59:40 crc kubenswrapper[4970]: I0930 09:59:40.849841 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"]
Sep 30 09:59:40 crc kubenswrapper[4970]: I0930 09:59:40.959858 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae66629d-a629-4053-89ac-0b2bc9fc9407-bundle\") pod \"4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc\" (UID: \"ae66629d-a629-4053-89ac-0b2bc9fc9407\") " pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"
Sep 30 09:59:40 crc kubenswrapper[4970]: I0930 09:59:40.959941 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdt7b\" (UniqueName: \"kubernetes.io/projected/ae66629d-a629-4053-89ac-0b2bc9fc9407-kube-api-access-qdt7b\") pod \"4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc\" (UID: \"ae66629d-a629-4053-89ac-0b2bc9fc9407\") " pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"
Sep 30 09:59:40 crc kubenswrapper[4970]: I0930 09:59:40.960017 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae66629d-a629-4053-89ac-0b2bc9fc9407-util\") pod \"4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc\" (UID: \"ae66629d-a629-4053-89ac-0b2bc9fc9407\") " pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"
Sep 30 09:59:41 crc kubenswrapper[4970]: I0930 09:59:41.062071 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae66629d-a629-4053-89ac-0b2bc9fc9407-bundle\") pod \"4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc\" (UID: \"ae66629d-a629-4053-89ac-0b2bc9fc9407\") " pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"
Sep 30 09:59:41 crc kubenswrapper[4970]: I0930 09:59:41.062159 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdt7b\" (UniqueName: \"kubernetes.io/projected/ae66629d-a629-4053-89ac-0b2bc9fc9407-kube-api-access-qdt7b\") pod \"4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc\" (UID: \"ae66629d-a629-4053-89ac-0b2bc9fc9407\") " pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"
Sep 30 09:59:41 crc kubenswrapper[4970]: I0930 09:59:41.062205 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae66629d-a629-4053-89ac-0b2bc9fc9407-util\") pod \"4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc\" (UID: \"ae66629d-a629-4053-89ac-0b2bc9fc9407\") " pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"
Sep 30 09:59:41 crc kubenswrapper[4970]: I0930 09:59:41.063180 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae66629d-a629-4053-89ac-0b2bc9fc9407-bundle\") pod \"4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc\" (UID: \"ae66629d-a629-4053-89ac-0b2bc9fc9407\") " pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"
Sep 30 09:59:41 crc kubenswrapper[4970]: I0930 09:59:41.063238 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae66629d-a629-4053-89ac-0b2bc9fc9407-util\") pod \"4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc\" (UID: \"ae66629d-a629-4053-89ac-0b2bc9fc9407\") " pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"
Sep 30 09:59:41 crc kubenswrapper[4970]: I0930 09:59:41.085088 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdt7b\" (UniqueName: \"kubernetes.io/projected/ae66629d-a629-4053-89ac-0b2bc9fc9407-kube-api-access-qdt7b\") pod \"4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc\" (UID: \"ae66629d-a629-4053-89ac-0b2bc9fc9407\") " pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"
Sep 30 09:59:41 crc kubenswrapper[4970]: I0930 09:59:41.218766 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"
Sep 30 09:59:41 crc kubenswrapper[4970]: I0930 09:59:41.685946 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc"]
Sep 30 09:59:41 crc kubenswrapper[4970]: W0930 09:59:41.692079 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae66629d_a629_4053_89ac_0b2bc9fc9407.slice/crio-4b6b4c32cbdd024e8660717c44d39415d7a22b309d97f357eaef6c9510ddc1fc WatchSource:0}: Error finding container 4b6b4c32cbdd024e8660717c44d39415d7a22b309d97f357eaef6c9510ddc1fc: Status 404 returned error can't find the container with id 4b6b4c32cbdd024e8660717c44d39415d7a22b309d97f357eaef6c9510ddc1fc
Sep 30 09:59:42 crc kubenswrapper[4970]: I0930 09:59:42.211697 4970 generic.go:334] "Generic (PLEG): container finished" podID="ae66629d-a629-4053-89ac-0b2bc9fc9407" containerID="2a8270cdefc71941c17d0a8bcf19e4da02897b169d7a76a0104a8cdb702d2fe0" exitCode=0
Sep 30 09:59:42 crc kubenswrapper[4970]: I0930 09:59:42.211788 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc" event={"ID":"ae66629d-a629-4053-89ac-0b2bc9fc9407","Type":"ContainerDied","Data":"2a8270cdefc71941c17d0a8bcf19e4da02897b169d7a76a0104a8cdb702d2fe0"}
Sep 30 09:59:42 crc kubenswrapper[4970]: I0930 09:59:42.212221 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc" event={"ID":"ae66629d-a629-4053-89ac-0b2bc9fc9407","Type":"ContainerStarted","Data":"4b6b4c32cbdd024e8660717c44d39415d7a22b309d97f357eaef6c9510ddc1fc"}
Sep 30 09:59:43 crc kubenswrapper[4970]: I0930 09:59:43.222876 4970 generic.go:334] "Generic (PLEG): container finished" podID="ae66629d-a629-4053-89ac-0b2bc9fc9407" containerID="c979c11b82027ccd66f3023141482489d8add544a32b97d5afc675cf38a1432c" exitCode=0
podID="ae66629d-a629-4053-89ac-0b2bc9fc9407" containerID="c979c11b82027ccd66f3023141482489d8add544a32b97d5afc675cf38a1432c" exitCode=0 Sep 30 09:59:43 crc kubenswrapper[4970]: I0930 09:59:43.223017 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc" event={"ID":"ae66629d-a629-4053-89ac-0b2bc9fc9407","Type":"ContainerDied","Data":"c979c11b82027ccd66f3023141482489d8add544a32b97d5afc675cf38a1432c"} Sep 30 09:59:44 crc kubenswrapper[4970]: I0930 09:59:44.234024 4970 generic.go:334] "Generic (PLEG): container finished" podID="ae66629d-a629-4053-89ac-0b2bc9fc9407" containerID="796465c759c60c4080abd2247eea908ecce8325d3eced6f973af41cbc5174cc4" exitCode=0 Sep 30 09:59:44 crc kubenswrapper[4970]: I0930 09:59:44.234073 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc" event={"ID":"ae66629d-a629-4053-89ac-0b2bc9fc9407","Type":"ContainerDied","Data":"796465c759c60c4080abd2247eea908ecce8325d3eced6f973af41cbc5174cc4"} Sep 30 09:59:45 crc kubenswrapper[4970]: I0930 09:59:45.622080 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc" Sep 30 09:59:45 crc kubenswrapper[4970]: I0930 09:59:45.730504 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae66629d-a629-4053-89ac-0b2bc9fc9407-bundle\") pod \"ae66629d-a629-4053-89ac-0b2bc9fc9407\" (UID: \"ae66629d-a629-4053-89ac-0b2bc9fc9407\") " Sep 30 09:59:45 crc kubenswrapper[4970]: I0930 09:59:45.730551 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdt7b\" (UniqueName: \"kubernetes.io/projected/ae66629d-a629-4053-89ac-0b2bc9fc9407-kube-api-access-qdt7b\") pod \"ae66629d-a629-4053-89ac-0b2bc9fc9407\" (UID: \"ae66629d-a629-4053-89ac-0b2bc9fc9407\") " Sep 30 09:59:45 crc kubenswrapper[4970]: I0930 09:59:45.730582 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae66629d-a629-4053-89ac-0b2bc9fc9407-util\") pod \"ae66629d-a629-4053-89ac-0b2bc9fc9407\" (UID: \"ae66629d-a629-4053-89ac-0b2bc9fc9407\") " Sep 30 09:59:45 crc kubenswrapper[4970]: I0930 09:59:45.735542 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae66629d-a629-4053-89ac-0b2bc9fc9407-bundle" (OuterVolumeSpecName: "bundle") pod "ae66629d-a629-4053-89ac-0b2bc9fc9407" (UID: "ae66629d-a629-4053-89ac-0b2bc9fc9407"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:59:45 crc kubenswrapper[4970]: I0930 09:59:45.737186 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae66629d-a629-4053-89ac-0b2bc9fc9407-kube-api-access-qdt7b" (OuterVolumeSpecName: "kube-api-access-qdt7b") pod "ae66629d-a629-4053-89ac-0b2bc9fc9407" (UID: "ae66629d-a629-4053-89ac-0b2bc9fc9407"). InnerVolumeSpecName "kube-api-access-qdt7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 09:59:45 crc kubenswrapper[4970]: I0930 09:59:45.750455 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae66629d-a629-4053-89ac-0b2bc9fc9407-util" (OuterVolumeSpecName: "util") pod "ae66629d-a629-4053-89ac-0b2bc9fc9407" (UID: "ae66629d-a629-4053-89ac-0b2bc9fc9407"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 09:59:45 crc kubenswrapper[4970]: I0930 09:59:45.832577 4970 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ae66629d-a629-4053-89ac-0b2bc9fc9407-util\") on node \"crc\" DevicePath \"\"" Sep 30 09:59:45 crc kubenswrapper[4970]: I0930 09:59:45.832621 4970 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ae66629d-a629-4053-89ac-0b2bc9fc9407-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 09:59:45 crc kubenswrapper[4970]: I0930 09:59:45.832633 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdt7b\" (UniqueName: \"kubernetes.io/projected/ae66629d-a629-4053-89ac-0b2bc9fc9407-kube-api-access-qdt7b\") on node \"crc\" DevicePath \"\"" Sep 30 09:59:46 crc kubenswrapper[4970]: I0930 09:59:46.251757 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc" event={"ID":"ae66629d-a629-4053-89ac-0b2bc9fc9407","Type":"ContainerDied","Data":"4b6b4c32cbdd024e8660717c44d39415d7a22b309d97f357eaef6c9510ddc1fc"} Sep 30 09:59:46 crc kubenswrapper[4970]: I0930 09:59:46.252226 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b6b4c32cbdd024e8660717c44d39415d7a22b309d97f357eaef6c9510ddc1fc" Sep 30 09:59:46 crc kubenswrapper[4970]: I0930 09:59:46.251829 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc" Sep 30 09:59:50 crc kubenswrapper[4970]: I0930 09:59:50.544153 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2"] Sep 30 09:59:50 crc kubenswrapper[4970]: E0930 09:59:50.544795 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae66629d-a629-4053-89ac-0b2bc9fc9407" containerName="util" Sep 30 09:59:50 crc kubenswrapper[4970]: I0930 09:59:50.544809 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae66629d-a629-4053-89ac-0b2bc9fc9407" containerName="util" Sep 30 09:59:50 crc kubenswrapper[4970]: E0930 09:59:50.544840 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae66629d-a629-4053-89ac-0b2bc9fc9407" containerName="extract" Sep 30 09:59:50 crc kubenswrapper[4970]: I0930 09:59:50.544847 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae66629d-a629-4053-89ac-0b2bc9fc9407" containerName="extract" Sep 30 09:59:50 crc kubenswrapper[4970]: E0930 09:59:50.544855 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae66629d-a629-4053-89ac-0b2bc9fc9407" containerName="pull" Sep 30 09:59:50 crc kubenswrapper[4970]: I0930 09:59:50.544862 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae66629d-a629-4053-89ac-0b2bc9fc9407" containerName="pull" Sep 30 09:59:50 crc kubenswrapper[4970]: I0930 09:59:50.545013 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae66629d-a629-4053-89ac-0b2bc9fc9407" containerName="extract" Sep 30 09:59:50 crc kubenswrapper[4970]: I0930 09:59:50.545775 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2" Sep 30 09:59:50 crc kubenswrapper[4970]: I0930 09:59:50.550132 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-5nph9" Sep 30 09:59:50 crc kubenswrapper[4970]: I0930 09:59:50.605839 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7phk\" (UniqueName: \"kubernetes.io/projected/814b0f3a-2bb6-45ba-a6c2-f798b43d4494-kube-api-access-c7phk\") pod \"openstack-operator-controller-operator-7d8c4cd779-pt9l2\" (UID: \"814b0f3a-2bb6-45ba-a6c2-f798b43d4494\") " pod="openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2" Sep 30 09:59:50 crc kubenswrapper[4970]: I0930 09:59:50.647360 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2"] Sep 30 09:59:50 crc kubenswrapper[4970]: I0930 09:59:50.706972 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7phk\" (UniqueName: \"kubernetes.io/projected/814b0f3a-2bb6-45ba-a6c2-f798b43d4494-kube-api-access-c7phk\") pod \"openstack-operator-controller-operator-7d8c4cd779-pt9l2\" (UID: \"814b0f3a-2bb6-45ba-a6c2-f798b43d4494\") " pod="openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2" Sep 30 09:59:50 crc kubenswrapper[4970]: I0930 09:59:50.728843 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7phk\" (UniqueName: \"kubernetes.io/projected/814b0f3a-2bb6-45ba-a6c2-f798b43d4494-kube-api-access-c7phk\") pod \"openstack-operator-controller-operator-7d8c4cd779-pt9l2\" 
(UID: \"814b0f3a-2bb6-45ba-a6c2-f798b43d4494\") " pod="openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2" Sep 30 09:59:50 crc kubenswrapper[4970]: I0930 09:59:50.865482 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2" Sep 30 09:59:51 crc kubenswrapper[4970]: I0930 09:59:51.333530 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2"] Sep 30 09:59:52 crc kubenswrapper[4970]: I0930 09:59:52.291054 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2" event={"ID":"814b0f3a-2bb6-45ba-a6c2-f798b43d4494","Type":"ContainerStarted","Data":"12f7981680652ecfa7529adae767cb6b849bf0c950227fd8688c0ccfa376ee73"} Sep 30 09:59:55 crc kubenswrapper[4970]: I0930 09:59:55.312926 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2" event={"ID":"814b0f3a-2bb6-45ba-a6c2-f798b43d4494","Type":"ContainerStarted","Data":"b8c72bf4479f952958e9ed0941f9ad6656667e19e374e4d27e0c4eccb2559f67"} Sep 30 09:59:57 crc kubenswrapper[4970]: I0930 09:59:57.965980 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-d98mq"] Sep 30 09:59:57 crc kubenswrapper[4970]: I0930 09:59:57.968103 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d98mq" Sep 30 09:59:57 crc kubenswrapper[4970]: I0930 09:59:57.979368 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d98mq"] Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.049100 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxx7m\" (UniqueName: \"kubernetes.io/projected/899bf7ac-6cc0-423b-9f77-5555ad6fe624-kube-api-access-xxx7m\") pod \"community-operators-d98mq\" (UID: \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\") " pod="openshift-marketplace/community-operators-d98mq" Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.049429 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bf7ac-6cc0-423b-9f77-5555ad6fe624-catalog-content\") pod \"community-operators-d98mq\" (UID: \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\") " pod="openshift-marketplace/community-operators-d98mq" Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.049493 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bf7ac-6cc0-423b-9f77-5555ad6fe624-utilities\") pod \"community-operators-d98mq\" (UID: \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\") " pod="openshift-marketplace/community-operators-d98mq" Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.151543 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bf7ac-6cc0-423b-9f77-5555ad6fe624-catalog-content\") pod \"community-operators-d98mq\" (UID: \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\") " pod="openshift-marketplace/community-operators-d98mq" Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.151638 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bf7ac-6cc0-423b-9f77-5555ad6fe624-utilities\") pod \"community-operators-d98mq\" (UID: \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\") " pod="openshift-marketplace/community-operators-d98mq" Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.151716 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxx7m\" (UniqueName: \"kubernetes.io/projected/899bf7ac-6cc0-423b-9f77-5555ad6fe624-kube-api-access-xxx7m\") pod \"community-operators-d98mq\" (UID: \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\") " pod="openshift-marketplace/community-operators-d98mq" Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.152362 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bf7ac-6cc0-423b-9f77-5555ad6fe624-utilities\") pod \"community-operators-d98mq\" (UID: \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\") " pod="openshift-marketplace/community-operators-d98mq" Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.152732 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bf7ac-6cc0-423b-9f77-5555ad6fe624-catalog-content\") pod \"community-operators-d98mq\" (UID: \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\") " pod="openshift-marketplace/community-operators-d98mq" Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.182668 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxx7m\" (UniqueName: \"kubernetes.io/projected/899bf7ac-6cc0-423b-9f77-5555ad6fe624-kube-api-access-xxx7m\") pod \"community-operators-d98mq\" (UID: \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\") " pod="openshift-marketplace/community-operators-d98mq" Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.287013 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-d98mq" Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.339394 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2" event={"ID":"814b0f3a-2bb6-45ba-a6c2-f798b43d4494","Type":"ContainerStarted","Data":"b07343e059e08dfc4a6d2f248c989dca71d99e3179fff69df9f95a9a6d4cd9e7"} Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.339823 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2" Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.382612 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2" podStartSLOduration=2.479672455 podStartE2EDuration="8.382589884s" podCreationTimestamp="2025-09-30 09:59:50 +0000 UTC" firstStartedPulling="2025-09-30 09:59:51.341572034 +0000 UTC m=+804.413422988" lastFinishedPulling="2025-09-30 09:59:57.244489493 +0000 UTC m=+810.316340417" observedRunningTime="2025-09-30 09:59:58.377831965 +0000 UTC m=+811.449682929" watchObservedRunningTime="2025-09-30 09:59:58.382589884 +0000 UTC m=+811.454440818" Sep 30 09:59:58 crc kubenswrapper[4970]: I0930 09:59:58.764332 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-d98mq"] Sep 30 09:59:58 crc kubenswrapper[4970]: W0930 09:59:58.764875 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod899bf7ac_6cc0_423b_9f77_5555ad6fe624.slice/crio-b26112b0919b453c69ebcb5992806ce196f70c8f0aca905510dd0a1b95c7197c WatchSource:0}: Error finding container b26112b0919b453c69ebcb5992806ce196f70c8f0aca905510dd0a1b95c7197c: Status 404 returned error can't find the container with id b26112b0919b453c69ebcb5992806ce196f70c8f0aca905510dd0a1b95c7197c Sep 30 09:59:59 crc kubenswrapper[4970]: I0930 09:59:59.347163 4970 generic.go:334] "Generic (PLEG): container finished" podID="899bf7ac-6cc0-423b-9f77-5555ad6fe624" containerID="c0e306c975187a44c8f485e6d7233923a9ed901a285efb3b07cde02fcd5fd2dd" exitCode=0 Sep 30 09:59:59 crc kubenswrapper[4970]: I0930 09:59:59.347257 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d98mq" event={"ID":"899bf7ac-6cc0-423b-9f77-5555ad6fe624","Type":"ContainerDied","Data":"c0e306c975187a44c8f485e6d7233923a9ed901a285efb3b07cde02fcd5fd2dd"} Sep 30 09:59:59 crc kubenswrapper[4970]: I0930 09:59:59.347718 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d98mq" event={"ID":"899bf7ac-6cc0-423b-9f77-5555ad6fe624","Type":"ContainerStarted","Data":"b26112b0919b453c69ebcb5992806ce196f70c8f0aca905510dd0a1b95c7197c"} Sep 30 09:59:59 crc kubenswrapper[4970]: I0930 09:59:59.350633 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-7d8c4cd779-pt9l2" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.136282 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr"] Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.137168 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.144485 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.147302 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.154555 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr"] Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.180338 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-config-volume\") pod \"collect-profiles-29320440-qpqkr\" (UID: \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.180436 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-secret-volume\") pod \"collect-profiles-29320440-qpqkr\" (UID: \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.180481 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kz9q\" (UniqueName: \"kubernetes.io/projected/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-kube-api-access-6kz9q\") pod \"collect-profiles-29320440-qpqkr\" (UID: \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.281578 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-secret-volume\") pod \"collect-profiles-29320440-qpqkr\" (UID: \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.281633 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kz9q\" (UniqueName: \"kubernetes.io/projected/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-kube-api-access-6kz9q\") pod \"collect-profiles-29320440-qpqkr\" (UID: \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.281718 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-config-volume\") pod \"collect-profiles-29320440-qpqkr\" (UID: \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.282857 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-config-volume\") pod 
\"collect-profiles-29320440-qpqkr\" (UID: \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.297336 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-secret-volume\") pod \"collect-profiles-29320440-qpqkr\" (UID: \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.302197 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kz9q\" (UniqueName: \"kubernetes.io/projected/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-kube-api-access-6kz9q\") pod \"collect-profiles-29320440-qpqkr\" (UID: \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.456335 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:00 crc kubenswrapper[4970]: I0930 10:00:00.921616 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr"] Sep 30 10:00:00 crc kubenswrapper[4970]: W0930 10:00:00.930332 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab1986e5_82d1_4ad7_b0db_f3bd67c590b5.slice/crio-043144609b4e6c55e542b676edd7991246b0003e2bc7bf7785323b0eaa9725d8 WatchSource:0}: Error finding container 043144609b4e6c55e542b676edd7991246b0003e2bc7bf7785323b0eaa9725d8: Status 404 returned error can't find the container with id 043144609b4e6c55e542b676edd7991246b0003e2bc7bf7785323b0eaa9725d8 Sep 30 10:00:01 crc kubenswrapper[4970]: I0930 10:00:01.362393 4970 generic.go:334] "Generic (PLEG): container finished" podID="ab1986e5-82d1-4ad7-b0db-f3bd67c590b5" containerID="5e9db830f3e303016aa8ac1d254c49a6bc1112e347878fa38956ceadddab9092" exitCode=0 Sep 30 10:00:01 crc kubenswrapper[4970]: I0930 10:00:01.362450 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" event={"ID":"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5","Type":"ContainerDied","Data":"5e9db830f3e303016aa8ac1d254c49a6bc1112e347878fa38956ceadddab9092"} Sep 30 10:00:01 crc kubenswrapper[4970]: I0930 10:00:01.362545 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" event={"ID":"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5","Type":"ContainerStarted","Data":"043144609b4e6c55e542b676edd7991246b0003e2bc7bf7785323b0eaa9725d8"} Sep 30 10:00:01 crc kubenswrapper[4970]: I0930 10:00:01.365092 4970 generic.go:334] "Generic (PLEG): container finished" podID="899bf7ac-6cc0-423b-9f77-5555ad6fe624" containerID="1abee814ba6565bba4dc51b30a5ef3953cfd616d042df136e4494f98a2a70c0a" exitCode=0 Sep 30 10:00:01 crc kubenswrapper[4970]: I0930 10:00:01.365139 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d98mq" event={"ID":"899bf7ac-6cc0-423b-9f77-5555ad6fe624","Type":"ContainerDied","Data":"1abee814ba6565bba4dc51b30a5ef3953cfd616d042df136e4494f98a2a70c0a"} Sep 30 10:00:02 crc kubenswrapper[4970]: I0930 10:00:02.377504 4970 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d98mq" event={"ID":"899bf7ac-6cc0-423b-9f77-5555ad6fe624","Type":"ContainerStarted","Data":"0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d"} Sep 30 10:00:02 crc kubenswrapper[4970]: I0930 10:00:02.720880 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:02 crc kubenswrapper[4970]: I0930 10:00:02.825566 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-secret-volume\") pod \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\" (UID: \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\") " Sep 30 10:00:02 crc kubenswrapper[4970]: I0930 10:00:02.826101 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-config-volume\") pod \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\" (UID: \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\") " Sep 30 10:00:02 crc kubenswrapper[4970]: I0930 10:00:02.826347 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6kz9q\" (UniqueName: \"kubernetes.io/projected/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-kube-api-access-6kz9q\") pod \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\" (UID: \"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5\") " Sep 30 10:00:02 crc kubenswrapper[4970]: I0930 10:00:02.826660 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-config-volume" (OuterVolumeSpecName: "config-volume") pod "ab1986e5-82d1-4ad7-b0db-f3bd67c590b5" (UID: "ab1986e5-82d1-4ad7-b0db-f3bd67c590b5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:00:02 crc kubenswrapper[4970]: I0930 10:00:02.826969 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 10:00:02 crc kubenswrapper[4970]: I0930 10:00:02.834638 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-kube-api-access-6kz9q" (OuterVolumeSpecName: "kube-api-access-6kz9q") pod "ab1986e5-82d1-4ad7-b0db-f3bd67c590b5" (UID: "ab1986e5-82d1-4ad7-b0db-f3bd67c590b5"). InnerVolumeSpecName "kube-api-access-6kz9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:00:02 crc kubenswrapper[4970]: I0930 10:00:02.846054 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ab1986e5-82d1-4ad7-b0db-f3bd67c590b5" (UID: "ab1986e5-82d1-4ad7-b0db-f3bd67c590b5"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:00:02 crc kubenswrapper[4970]: I0930 10:00:02.928112 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6kz9q\" (UniqueName: \"kubernetes.io/projected/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-kube-api-access-6kz9q\") on node \"crc\" DevicePath \"\"" Sep 30 10:00:02 crc kubenswrapper[4970]: I0930 10:00:02.928159 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 10:00:03 crc kubenswrapper[4970]: I0930 10:00:03.386819 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" event={"ID":"ab1986e5-82d1-4ad7-b0db-f3bd67c590b5","Type":"ContainerDied","Data":"043144609b4e6c55e542b676edd7991246b0003e2bc7bf7785323b0eaa9725d8"} Sep 30 10:00:03 crc kubenswrapper[4970]: I0930 10:00:03.386895 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043144609b4e6c55e542b676edd7991246b0003e2bc7bf7785323b0eaa9725d8" Sep 30 10:00:03 crc kubenswrapper[4970]: I0930 10:00:03.386842 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr" Sep 30 10:00:03 crc kubenswrapper[4970]: I0930 10:00:03.415088 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-d98mq" podStartSLOduration=3.593318225 podStartE2EDuration="6.415063354s" podCreationTimestamp="2025-09-30 09:59:57 +0000 UTC" firstStartedPulling="2025-09-30 09:59:59.348614534 +0000 UTC m=+812.420465468" lastFinishedPulling="2025-09-30 10:00:02.170359663 +0000 UTC m=+815.242210597" observedRunningTime="2025-09-30 10:00:03.411734124 +0000 UTC m=+816.483585058" watchObservedRunningTime="2025-09-30 10:00:03.415063354 +0000 UTC m=+816.486914298" Sep 30 10:00:04 crc kubenswrapper[4970]: I0930 10:00:04.821708 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:00:04 crc kubenswrapper[4970]: I0930 10:00:04.822034 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.415952 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-885ws"] Sep 30 10:00:06 crc kubenswrapper[4970]: E0930 10:00:06.416310 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab1986e5-82d1-4ad7-b0db-f3bd67c590b5" containerName="collect-profiles" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.416332 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab1986e5-82d1-4ad7-b0db-f3bd67c590b5" containerName="collect-profiles" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.416511 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab1986e5-82d1-4ad7-b0db-f3bd67c590b5" containerName="collect-profiles" Sep 30 10:00:06 crc 
kubenswrapper[4970]: I0930 10:00:06.417550 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.433761 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-885ws"] Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.477181 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prr2p\" (UniqueName: \"kubernetes.io/projected/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-kube-api-access-prr2p\") pod \"redhat-operators-885ws\" (UID: \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\") " pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.477770 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-catalog-content\") pod \"redhat-operators-885ws\" (UID: \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\") " pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.477927 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-utilities\") pod \"redhat-operators-885ws\" (UID: \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\") " pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.579489 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prr2p\" (UniqueName: \"kubernetes.io/projected/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-kube-api-access-prr2p\") pod \"redhat-operators-885ws\" (UID: \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\") " pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.579589 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-catalog-content\") pod \"redhat-operators-885ws\" (UID: \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\") " pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.579642 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-utilities\") pod \"redhat-operators-885ws\" (UID: \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\") " pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.580219 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-utilities\") pod \"redhat-operators-885ws\" (UID: \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\") " pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.580451 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-catalog-content\") pod \"redhat-operators-885ws\" (UID: \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\") " pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 
10:00:06.601794 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prr2p\" (UniqueName: \"kubernetes.io/projected/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-kube-api-access-prr2p\") pod \"redhat-operators-885ws\" (UID: \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\") " pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:06 crc kubenswrapper[4970]: I0930 10:00:06.736317 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:07 crc kubenswrapper[4970]: I0930 10:00:07.291601 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-885ws"] Sep 30 10:00:07 crc kubenswrapper[4970]: I0930 10:00:07.415467 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-885ws" event={"ID":"ec858bf1-5d08-410d-b66e-4bb2e6c240b9","Type":"ContainerStarted","Data":"c242f7f5b1098495f0ccf92d1be453aa25c13e6df44b3607725564cb6ffdbd78"} Sep 30 10:00:08 crc kubenswrapper[4970]: I0930 10:00:08.288166 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-d98mq" Sep 30 10:00:08 crc kubenswrapper[4970]: I0930 10:00:08.291405 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-d98mq" Sep 30 10:00:08 crc kubenswrapper[4970]: I0930 10:00:08.348322 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-d98mq" Sep 30 10:00:08 crc kubenswrapper[4970]: I0930 10:00:08.436072 4970 generic.go:334] "Generic (PLEG): container finished" podID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" containerID="ec7c9b0b972845f0ef098cbf68087a868dd52335ff77a84c5ec22e55769507ff" exitCode=0 Sep 30 10:00:08 crc kubenswrapper[4970]: I0930 10:00:08.436313 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-885ws" event={"ID":"ec858bf1-5d08-410d-b66e-4bb2e6c240b9","Type":"ContainerDied","Data":"ec7c9b0b972845f0ef098cbf68087a868dd52335ff77a84c5ec22e55769507ff"} Sep 30 10:00:08 crc kubenswrapper[4970]: I0930 10:00:08.480588 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-d98mq" Sep 30 10:00:10 crc kubenswrapper[4970]: I0930 10:00:10.449830 4970 generic.go:334] "Generic (PLEG): container finished" podID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" containerID="731aecc62e0f796ce8796e9c160dc955298865605275b5704f47bada49f0e999" exitCode=0 Sep 30 10:00:10 crc kubenswrapper[4970]: I0930 10:00:10.449950 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-885ws" event={"ID":"ec858bf1-5d08-410d-b66e-4bb2e6c240b9","Type":"ContainerDied","Data":"731aecc62e0f796ce8796e9c160dc955298865605275b5704f47bada49f0e999"} Sep 30 10:00:10 crc kubenswrapper[4970]: I0930 10:00:10.590831 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d98mq"] Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.193409 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gbnzq"] Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.195530 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.211343 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbnzq"] Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.368517 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2shz\" (UniqueName: \"kubernetes.io/projected/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-kube-api-access-n2shz\") pod \"redhat-marketplace-gbnzq\" (UID: \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\") " pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.368597 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-catalog-content\") pod \"redhat-marketplace-gbnzq\" (UID: \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\") " pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.368619 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-utilities\") pod \"redhat-marketplace-gbnzq\" (UID: \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\") " pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.457037 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-d98mq" podUID="899bf7ac-6cc0-423b-9f77-5555ad6fe624" containerName="registry-server" containerID="cri-o://0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d" gracePeriod=2 Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.469709 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2shz\" (UniqueName: \"kubernetes.io/projected/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-kube-api-access-n2shz\") pod \"redhat-marketplace-gbnzq\" (UID: \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\") " pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.469787 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-catalog-content\") pod \"redhat-marketplace-gbnzq\" (UID: \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\") " pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.469810 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-utilities\") pod \"redhat-marketplace-gbnzq\" (UID: \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\") " pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.470323 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-catalog-content\") pod \"redhat-marketplace-gbnzq\" (UID: \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\") " pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.470354 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-utilities\") pod \"redhat-marketplace-gbnzq\" (UID: \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\") " pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.492061 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2shz\" (UniqueName: \"kubernetes.io/projected/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-kube-api-access-n2shz\") pod \"redhat-marketplace-gbnzq\" (UID: \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\") " pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:11 crc kubenswrapper[4970]: I0930 10:00:11.514351 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.042810 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbnzq"] Sep 30 10:00:12 crc kubenswrapper[4970]: W0930 10:00:12.054739 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b7d0772_1c7d_443b_a9f7_fa64461d84bc.slice/crio-b029d1ff22f2cb5726cc218e243437bf04b2c9c3fe9f1ea39890061227c03f61 WatchSource:0}: Error finding container b029d1ff22f2cb5726cc218e243437bf04b2c9c3fe9f1ea39890061227c03f61: Status 404 returned error can't find the container with id b029d1ff22f2cb5726cc218e243437bf04b2c9c3fe9f1ea39890061227c03f61 Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.059270 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d98mq" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.180590 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bf7ac-6cc0-423b-9f77-5555ad6fe624-utilities\") pod \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\" (UID: \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\") " Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.180675 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxx7m\" (UniqueName: \"kubernetes.io/projected/899bf7ac-6cc0-423b-9f77-5555ad6fe624-kube-api-access-xxx7m\") pod \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\" (UID: \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\") " Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.180746 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bf7ac-6cc0-423b-9f77-5555ad6fe624-catalog-content\") pod \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\" (UID: \"899bf7ac-6cc0-423b-9f77-5555ad6fe624\") " Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.181540 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899bf7ac-6cc0-423b-9f77-5555ad6fe624-utilities" (OuterVolumeSpecName: "utilities") pod "899bf7ac-6cc0-423b-9f77-5555ad6fe624" (UID: "899bf7ac-6cc0-423b-9f77-5555ad6fe624"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.188358 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899bf7ac-6cc0-423b-9f77-5555ad6fe624-kube-api-access-xxx7m" (OuterVolumeSpecName: "kube-api-access-xxx7m") pod "899bf7ac-6cc0-423b-9f77-5555ad6fe624" (UID: "899bf7ac-6cc0-423b-9f77-5555ad6fe624"). InnerVolumeSpecName "kube-api-access-xxx7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.257332 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899bf7ac-6cc0-423b-9f77-5555ad6fe624-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "899bf7ac-6cc0-423b-9f77-5555ad6fe624" (UID: "899bf7ac-6cc0-423b-9f77-5555ad6fe624"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.282200 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bf7ac-6cc0-423b-9f77-5555ad6fe624-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.282240 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxx7m\" (UniqueName: \"kubernetes.io/projected/899bf7ac-6cc0-423b-9f77-5555ad6fe624-kube-api-access-xxx7m\") on node \"crc\" DevicePath \"\"" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.282252 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bf7ac-6cc0-423b-9f77-5555ad6fe624-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.465963 4970 generic.go:334] "Generic (PLEG): container finished" podID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" containerID="06c172fcc432ebf572c0f41c4f50ac5f865795b08631ff1a66f8c82d3729daf7" exitCode=0 Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.466086 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbnzq" event={"ID":"2b7d0772-1c7d-443b-a9f7-fa64461d84bc","Type":"ContainerDied","Data":"06c172fcc432ebf572c0f41c4f50ac5f865795b08631ff1a66f8c82d3729daf7"} Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.467512 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbnzq" event={"ID":"2b7d0772-1c7d-443b-a9f7-fa64461d84bc","Type":"ContainerStarted","Data":"b029d1ff22f2cb5726cc218e243437bf04b2c9c3fe9f1ea39890061227c03f61"} Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.473029 4970 generic.go:334] "Generic (PLEG): container finished" podID="899bf7ac-6cc0-423b-9f77-5555ad6fe624" containerID="0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d" exitCode=0 Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.473112 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d98mq" event={"ID":"899bf7ac-6cc0-423b-9f77-5555ad6fe624","Type":"ContainerDied","Data":"0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d"} Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.473151 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-d98mq" 
event={"ID":"899bf7ac-6cc0-423b-9f77-5555ad6fe624","Type":"ContainerDied","Data":"b26112b0919b453c69ebcb5992806ce196f70c8f0aca905510dd0a1b95c7197c"} Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.473182 4970 scope.go:117] "RemoveContainer" containerID="0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.473219 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-d98mq" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.476909 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-885ws" event={"ID":"ec858bf1-5d08-410d-b66e-4bb2e6c240b9","Type":"ContainerStarted","Data":"3b31d8a9a5588c03aa7590d2962884308ea168abe7d4d5fcd7e685c48d9ea2b0"} Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.496477 4970 scope.go:117] "RemoveContainer" containerID="1abee814ba6565bba4dc51b30a5ef3953cfd616d042df136e4494f98a2a70c0a" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.516660 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-d98mq"] Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.522267 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-d98mq"] Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.539692 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-885ws" podStartSLOduration=3.69059865 podStartE2EDuration="6.539667519s" podCreationTimestamp="2025-09-30 10:00:06 +0000 UTC" firstStartedPulling="2025-09-30 10:00:08.439504316 +0000 UTC m=+821.511355250" lastFinishedPulling="2025-09-30 10:00:11.288573185 +0000 UTC m=+824.360424119" observedRunningTime="2025-09-30 10:00:12.530183622 +0000 UTC m=+825.602034566" watchObservedRunningTime="2025-09-30 10:00:12.539667519 +0000 UTC m=+825.611518453" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.542824 4970 scope.go:117] "RemoveContainer" containerID="c0e306c975187a44c8f485e6d7233923a9ed901a285efb3b07cde02fcd5fd2dd" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.575226 4970 scope.go:117] "RemoveContainer" containerID="0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d" Sep 30 10:00:12 crc kubenswrapper[4970]: E0930 10:00:12.575795 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d\": container with ID starting with 0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d not found: ID does not exist" containerID="0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.575854 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d"} err="failed to get container status \"0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d\": rpc error: code = NotFound desc = could not find container \"0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d\": container with ID starting with 0e89bcb46db9938fd6d0a0ba241672f9e97fa73ec154f38b2c4d857ff086da9d not found: ID does not exist" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.575895 4970 scope.go:117] "RemoveContainer" 
containerID="1abee814ba6565bba4dc51b30a5ef3953cfd616d042df136e4494f98a2a70c0a" Sep 30 10:00:12 crc kubenswrapper[4970]: E0930 10:00:12.576377 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1abee814ba6565bba4dc51b30a5ef3953cfd616d042df136e4494f98a2a70c0a\": container with ID starting with 1abee814ba6565bba4dc51b30a5ef3953cfd616d042df136e4494f98a2a70c0a not found: ID does not exist" containerID="1abee814ba6565bba4dc51b30a5ef3953cfd616d042df136e4494f98a2a70c0a" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.576513 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1abee814ba6565bba4dc51b30a5ef3953cfd616d042df136e4494f98a2a70c0a"} err="failed to get container status \"1abee814ba6565bba4dc51b30a5ef3953cfd616d042df136e4494f98a2a70c0a\": rpc error: code = NotFound desc = could not find container \"1abee814ba6565bba4dc51b30a5ef3953cfd616d042df136e4494f98a2a70c0a\": container with ID starting with 1abee814ba6565bba4dc51b30a5ef3953cfd616d042df136e4494f98a2a70c0a not found: ID does not exist" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.576613 4970 scope.go:117] "RemoveContainer" containerID="c0e306c975187a44c8f485e6d7233923a9ed901a285efb3b07cde02fcd5fd2dd" Sep 30 10:00:12 crc kubenswrapper[4970]: E0930 10:00:12.577052 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e306c975187a44c8f485e6d7233923a9ed901a285efb3b07cde02fcd5fd2dd\": container with ID starting with c0e306c975187a44c8f485e6d7233923a9ed901a285efb3b07cde02fcd5fd2dd not found: ID does not exist" containerID="c0e306c975187a44c8f485e6d7233923a9ed901a285efb3b07cde02fcd5fd2dd" Sep 30 10:00:12 crc kubenswrapper[4970]: I0930 10:00:12.577092 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e306c975187a44c8f485e6d7233923a9ed901a285efb3b07cde02fcd5fd2dd"} err="failed to get container status \"c0e306c975187a44c8f485e6d7233923a9ed901a285efb3b07cde02fcd5fd2dd\": rpc error: code = NotFound desc = could not find container \"c0e306c975187a44c8f485e6d7233923a9ed901a285efb3b07cde02fcd5fd2dd\": container with ID starting with c0e306c975187a44c8f485e6d7233923a9ed901a285efb3b07cde02fcd5fd2dd not found: ID does not exist" Sep 30 10:00:13 crc kubenswrapper[4970]: I0930 10:00:13.682444 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899bf7ac-6cc0-423b-9f77-5555ad6fe624" path="/var/lib/kubelet/pods/899bf7ac-6cc0-423b-9f77-5555ad6fe624/volumes" Sep 30 10:00:14 crc kubenswrapper[4970]: I0930 10:00:14.506283 4970 generic.go:334] "Generic (PLEG): container finished" podID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" containerID="2ac34a97a5d06215e6048258c2022004343d6191b6bff097c3f525ce72d5e140" exitCode=0 Sep 30 10:00:14 crc kubenswrapper[4970]: I0930 10:00:14.506334 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbnzq" event={"ID":"2b7d0772-1c7d-443b-a9f7-fa64461d84bc","Type":"ContainerDied","Data":"2ac34a97a5d06215e6048258c2022004343d6191b6bff097c3f525ce72d5e140"} Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.880158 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr"] Sep 30 10:00:15 crc kubenswrapper[4970]: E0930 10:00:15.880909 4970 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="899bf7ac-6cc0-423b-9f77-5555ad6fe624" containerName="extract-utilities" Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.880927 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bf7ac-6cc0-423b-9f77-5555ad6fe624" containerName="extract-utilities" Sep 30 10:00:15 crc kubenswrapper[4970]: E0930 10:00:15.880939 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899bf7ac-6cc0-423b-9f77-5555ad6fe624" containerName="extract-content" Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.880947 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bf7ac-6cc0-423b-9f77-5555ad6fe624" containerName="extract-content" Sep 30 10:00:15 crc kubenswrapper[4970]: E0930 10:00:15.880959 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899bf7ac-6cc0-423b-9f77-5555ad6fe624" containerName="registry-server" Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.880967 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bf7ac-6cc0-423b-9f77-5555ad6fe624" containerName="registry-server" Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.881150 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="899bf7ac-6cc0-423b-9f77-5555ad6fe624" containerName="registry-server" Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.881977 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr" Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.886882 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-lnq7h" Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.889370 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj"] Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.890928 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj" Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.895820 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-v7v7r" Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.912512 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj"] Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.926739 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr"] Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.940599 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph"] Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.941772 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph" Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.945507 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-blch6" Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.963697 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph"] Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.982370 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj"] Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.983992 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" Sep 30 10:00:15 crc kubenswrapper[4970]: I0930 10:00:15.994728 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-62hnc" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.022704 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.043081 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.044411 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.047991 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97f9\" (UniqueName: \"kubernetes.io/projected/c9a40f4a-1de7-45da-91e9-4f11637452b2-kube-api-access-s97f9\") pod \"designate-operator-controller-manager-84f4f7b77b-pf2ph\" (UID: \"c9a40f4a-1de7-45da-91e9-4f11637452b2\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.048076 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx56k\" (UniqueName: \"kubernetes.io/projected/7131ae21-9827-4028-9841-fbc480e7b938-kube-api-access-zx56k\") pod \"cinder-operator-controller-manager-644bddb6d8-ws6gj\" (UID: \"7131ae21-9827-4028-9841-fbc480e7b938\") " pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.048147 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tkw6\" (UniqueName: \"kubernetes.io/projected/d288c95d-759c-4b29-8be6-304869f99ae7-kube-api-access-9tkw6\") pod \"barbican-operator-controller-manager-6ff8b75857-fdjgr\" (UID: \"d288c95d-759c-4b29-8be6-304869f99ae7\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.052395 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-cbtp4" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.060364 4970 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.063934 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.075775 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.084474 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-62772" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.090149 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.096369 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.097768 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.103538 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.103903 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.104419 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-n7j86" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.141083 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.142992 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.149695 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-wzptx" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.150604 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s97f9\" (UniqueName: \"kubernetes.io/projected/c9a40f4a-1de7-45da-91e9-4f11637452b2-kube-api-access-s97f9\") pod \"designate-operator-controller-manager-84f4f7b77b-pf2ph\" (UID: \"c9a40f4a-1de7-45da-91e9-4f11637452b2\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.150644 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx56k\" (UniqueName: \"kubernetes.io/projected/7131ae21-9827-4028-9841-fbc480e7b938-kube-api-access-zx56k\") pod \"cinder-operator-controller-manager-644bddb6d8-ws6gj\" (UID: \"7131ae21-9827-4028-9841-fbc480e7b938\") " pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.150687 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tkw6\" (UniqueName: \"kubernetes.io/projected/d288c95d-759c-4b29-8be6-304869f99ae7-kube-api-access-9tkw6\") pod \"barbican-operator-controller-manager-6ff8b75857-fdjgr\" (UID: \"d288c95d-759c-4b29-8be6-304869f99ae7\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.150732 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pzz9\" (UniqueName: \"kubernetes.io/projected/b611fd3e-a529-4c90-8e81-c7352004d62f-kube-api-access-8pzz9\") pod \"glance-operator-controller-manager-84958c4d49-q2llj\" (UID: \"b611fd3e-a529-4c90-8e81-c7352004d62f\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.150765 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5sp\" (UniqueName: \"kubernetes.io/projected/908cf55d-1ac7-4814-9f4e-ddb57acb1b76-kube-api-access-st5sp\") pod \"heat-operator-controller-manager-5d889d78cf-ckjvw\" (UID: \"908cf55d-1ac7-4814-9f4e-ddb57acb1b76\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.150785 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltj9b\" (UniqueName: \"kubernetes.io/projected/0dab040d-a74a-48f1-b2e5-fb2fe6de3b58-kube-api-access-ltj9b\") pod \"horizon-operator-controller-manager-9f4696d94-7bcpp\" (UID: \"0dab040d-a74a-48f1-b2e5-fb2fe6de3b58\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.167784 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.168650 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd"] Sep 30 10:00:16 crc 
kubenswrapper[4970]: I0930 10:00:16.177741 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.179823 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mnmrn" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.194228 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tkw6\" (UniqueName: \"kubernetes.io/projected/d288c95d-759c-4b29-8be6-304869f99ae7-kube-api-access-9tkw6\") pod \"barbican-operator-controller-manager-6ff8b75857-fdjgr\" (UID: \"d288c95d-759c-4b29-8be6-304869f99ae7\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.194348 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.197579 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s97f9\" (UniqueName: \"kubernetes.io/projected/c9a40f4a-1de7-45da-91e9-4f11637452b2-kube-api-access-s97f9\") pod \"designate-operator-controller-manager-84f4f7b77b-pf2ph\" (UID: \"c9a40f4a-1de7-45da-91e9-4f11637452b2\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.203357 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.204814 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.204989 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx56k\" (UniqueName: \"kubernetes.io/projected/7131ae21-9827-4028-9841-fbc480e7b938-kube-api-access-zx56k\") pod \"cinder-operator-controller-manager-644bddb6d8-ws6gj\" (UID: \"7131ae21-9827-4028-9841-fbc480e7b938\") " pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.210263 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-sdxdd" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.211808 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.220249 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.237072 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.251769 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7-cert\") pod \"infra-operator-controller-manager-7d857cc749-svx8h\" (UID: \"b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.252140 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dmx7\" (UniqueName: \"kubernetes.io/projected/b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7-kube-api-access-9dmx7\") pod \"infra-operator-controller-manager-7d857cc749-svx8h\" (UID: \"b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.252179 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pzz9\" (UniqueName: \"kubernetes.io/projected/b611fd3e-a529-4c90-8e81-c7352004d62f-kube-api-access-8pzz9\") pod \"glance-operator-controller-manager-84958c4d49-q2llj\" (UID: \"b611fd3e-a529-4c90-8e81-c7352004d62f\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.252219 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st5sp\" (UniqueName: \"kubernetes.io/projected/908cf55d-1ac7-4814-9f4e-ddb57acb1b76-kube-api-access-st5sp\") pod \"heat-operator-controller-manager-5d889d78cf-ckjvw\" (UID: \"908cf55d-1ac7-4814-9f4e-ddb57acb1b76\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.252242 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltj9b\" (UniqueName: \"kubernetes.io/projected/0dab040d-a74a-48f1-b2e5-fb2fe6de3b58-kube-api-access-ltj9b\") pod \"horizon-operator-controller-manager-9f4696d94-7bcpp\" (UID: \"0dab040d-a74a-48f1-b2e5-fb2fe6de3b58\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.252262 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9799\" (UniqueName: \"kubernetes.io/projected/cefaa649-872b-43be-9763-85ee950bb5d6-kube-api-access-d9799\") pod \"ironic-operator-controller-manager-7975b88857-js7xj\" (UID: \"cefaa649-872b-43be-9763-85ee950bb5d6\") " pod="openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.260223 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.261803 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.313604 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-fznmh" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.315239 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pzz9\" (UniqueName: \"kubernetes.io/projected/b611fd3e-a529-4c90-8e81-c7352004d62f-kube-api-access-8pzz9\") pod \"glance-operator-controller-manager-84958c4d49-q2llj\" (UID: \"b611fd3e-a529-4c90-8e81-c7352004d62f\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.316817 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltj9b\" (UniqueName: \"kubernetes.io/projected/0dab040d-a74a-48f1-b2e5-fb2fe6de3b58-kube-api-access-ltj9b\") pod \"horizon-operator-controller-manager-9f4696d94-7bcpp\" (UID: \"0dab040d-a74a-48f1-b2e5-fb2fe6de3b58\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.318202 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.321634 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st5sp\" (UniqueName: \"kubernetes.io/projected/908cf55d-1ac7-4814-9f4e-ddb57acb1b76-kube-api-access-st5sp\") pod \"heat-operator-controller-manager-5d889d78cf-ckjvw\" (UID: \"908cf55d-1ac7-4814-9f4e-ddb57acb1b76\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.347181 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.384970 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7-cert\") pod \"infra-operator-controller-manager-7d857cc749-svx8h\" (UID: \"b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.385188 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dmx7\" (UniqueName: \"kubernetes.io/projected/b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7-kube-api-access-9dmx7\") pod \"infra-operator-controller-manager-7d857cc749-svx8h\" (UID: \"b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.385233 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km5r7\" (UniqueName: \"kubernetes.io/projected/1b1a92f2-46aa-492c-906b-1b86c58ba818-kube-api-access-km5r7\") pod \"mariadb-operator-controller-manager-88c7-vwkw2\" (UID: \"1b1a92f2-46aa-492c-906b-1b86c58ba818\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.385287 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8zbl\" (UniqueName: \"kubernetes.io/projected/9d9bdcb3-a944-4379-8dfd-858a022e946a-kube-api-access-t8zbl\") pod \"keystone-operator-controller-manager-5bd55b4bff-vjpqd\" (UID: \"9d9bdcb3-a944-4379-8dfd-858a022e946a\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.385335 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6n5w\" (UniqueName: \"kubernetes.io/projected/ae58a1aa-0503-4387-91cf-fc6f396a180f-kube-api-access-p6n5w\") pod \"manila-operator-controller-manager-6d68dbc695-rjk8q\" (UID: \"ae58a1aa-0503-4387-91cf-fc6f396a180f\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.385401 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9799\" (UniqueName: \"kubernetes.io/projected/cefaa649-872b-43be-9763-85ee950bb5d6-kube-api-access-d9799\") pod \"ironic-operator-controller-manager-7975b88857-js7xj\" (UID: \"cefaa649-872b-43be-9763-85ee950bb5d6\") " pod="openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj" Sep 30 10:00:16 crc kubenswrapper[4970]: E0930 10:00:16.385764 4970 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Sep 30 10:00:16 crc kubenswrapper[4970]: E0930 10:00:16.385827 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7-cert podName:b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7 nodeName:}" failed. No retries permitted until 2025-09-30 10:00:16.885807689 +0000 UTC m=+829.957658623 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7-cert") pod "infra-operator-controller-manager-7d857cc749-svx8h" (UID: "b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7") : secret "infra-operator-webhook-server-cert" not found Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.399073 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.399506 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.424667 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9799\" (UniqueName: \"kubernetes.io/projected/cefaa649-872b-43be-9763-85ee950bb5d6-kube-api-access-d9799\") pod \"ironic-operator-controller-manager-7975b88857-js7xj\" (UID: \"cefaa649-872b-43be-9763-85ee950bb5d6\") " pod="openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.430478 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.441670 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dmx7\" (UniqueName: \"kubernetes.io/projected/b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7-kube-api-access-9dmx7\") pod \"infra-operator-controller-manager-7d857cc749-svx8h\" (UID: \"b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.448888 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.450374 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.466214 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-5dnxn" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.484851 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.486590 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km5r7\" (UniqueName: \"kubernetes.io/projected/1b1a92f2-46aa-492c-906b-1b86c58ba818-kube-api-access-km5r7\") pod \"mariadb-operator-controller-manager-88c7-vwkw2\" (UID: \"1b1a92f2-46aa-492c-906b-1b86c58ba818\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.486687 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8zbl\" (UniqueName: \"kubernetes.io/projected/9d9bdcb3-a944-4379-8dfd-858a022e946a-kube-api-access-t8zbl\") pod \"keystone-operator-controller-manager-5bd55b4bff-vjpqd\" (UID: \"9d9bdcb3-a944-4379-8dfd-858a022e946a\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.486732 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6n5w\" (UniqueName: \"kubernetes.io/projected/ae58a1aa-0503-4387-91cf-fc6f396a180f-kube-api-access-p6n5w\") pod \"manila-operator-controller-manager-6d68dbc695-rjk8q\" (UID: \"ae58a1aa-0503-4387-91cf-fc6f396a180f\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.501324 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.502628 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.510505 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-rxc45" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.524488 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6n5w\" (UniqueName: \"kubernetes.io/projected/ae58a1aa-0503-4387-91cf-fc6f396a180f-kube-api-access-p6n5w\") pod \"manila-operator-controller-manager-6d68dbc695-rjk8q\" (UID: \"ae58a1aa-0503-4387-91cf-fc6f396a180f\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.543084 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km5r7\" (UniqueName: \"kubernetes.io/projected/1b1a92f2-46aa-492c-906b-1b86c58ba818-kube-api-access-km5r7\") pod \"mariadb-operator-controller-manager-88c7-vwkw2\" (UID: \"1b1a92f2-46aa-492c-906b-1b86c58ba818\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.546368 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbnzq" event={"ID":"2b7d0772-1c7d-443b-a9f7-fa64461d84bc","Type":"ContainerStarted","Data":"4030b267a35724d350401aff433288f473b403ac03b975ce3838282e4db27398"} Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.547066 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8zbl\" (UniqueName: \"kubernetes.io/projected/9d9bdcb3-a944-4379-8dfd-858a022e946a-kube-api-access-t8zbl\") pod \"keystone-operator-controller-manager-5bd55b4bff-vjpqd\" (UID: \"9d9bdcb3-a944-4379-8dfd-858a022e946a\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.553413 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.554775 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.563621 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-9bhrs" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.569786 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.592435 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn86g\" (UniqueName: \"kubernetes.io/projected/0283cb68-98f4-4dcf-99c0-55ebc251dc19-kube-api-access-zn86g\") pod \"neutron-operator-controller-manager-64d7b59854-8m95z\" (UID: \"0283cb68-98f4-4dcf-99c0-55ebc251dc19\") " pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.609956 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.618205 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.631429 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.633090 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.636554 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.640079 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.641364 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.641594 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-h4f6w" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.650130 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-zg25p" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.657757 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.658159 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.690989 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.700528 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zn86g\" (UniqueName: \"kubernetes.io/projected/0283cb68-98f4-4dcf-99c0-55ebc251dc19-kube-api-access-zn86g\") pod \"neutron-operator-controller-manager-64d7b59854-8m95z\" (UID: \"0283cb68-98f4-4dcf-99c0-55ebc251dc19\") " pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.700610 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtlz\" (UniqueName: \"kubernetes.io/projected/b4a0b16f-5d81-4236-850f-03f628bb3595-kube-api-access-sjtlz\") pod \"octavia-operator-controller-manager-76fcc6dc7c-p9fxz\" (UID: \"b4a0b16f-5d81-4236-850f-03f628bb3595\") " pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.700838 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7944p\" (UniqueName: \"kubernetes.io/projected/815b1df3-7d86-407a-a793-baec392c0f76-kube-api-access-7944p\") pod \"nova-operator-controller-manager-c7c776c96-74r7d\" (UID: \"815b1df3-7d86-407a-a793-baec392c0f76\") " pod="openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.709278 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.726623 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.728517 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.731845 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-tlkxm" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.738045 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.739772 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.741063 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zn86g\" (UniqueName: \"kubernetes.io/projected/0283cb68-98f4-4dcf-99c0-55ebc251dc19-kube-api-access-zn86g\") pod \"neutron-operator-controller-manager-64d7b59854-8m95z\" (UID: \"0283cb68-98f4-4dcf-99c0-55ebc251dc19\") " pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.741504 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.745928 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.747306 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.762357 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-qsg6b" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.807520 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjtlz\" (UniqueName: \"kubernetes.io/projected/b4a0b16f-5d81-4236-850f-03f628bb3595-kube-api-access-sjtlz\") pod \"octavia-operator-controller-manager-76fcc6dc7c-p9fxz\" (UID: \"b4a0b16f-5d81-4236-850f-03f628bb3595\") " pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.807662 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsxmx\" (UniqueName: \"kubernetes.io/projected/40f541c2-3a4e-48ec-a01f-a3d395202085-kube-api-access-wsxmx\") pod \"openstack-baremetal-operator-controller-manager-6d776955-gnxmq\" (UID: \"40f541c2-3a4e-48ec-a01f-a3d395202085\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.807722 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7944p\" (UniqueName: \"kubernetes.io/projected/815b1df3-7d86-407a-a793-baec392c0f76-kube-api-access-7944p\") pod \"nova-operator-controller-manager-c7c776c96-74r7d\" (UID: \"815b1df3-7d86-407a-a793-baec392c0f76\") " pod="openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.807744 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-q7rbl\" (UniqueName: \"kubernetes.io/projected/db952a6d-9ea1-482e-aec3-7a93fcd6587c-kube-api-access-q7rbl\") pod \"ovn-operator-controller-manager-9976ff44c-ss9vs\" (UID: \"db952a6d-9ea1-482e-aec3-7a93fcd6587c\") " pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.807775 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40f541c2-3a4e-48ec-a01f-a3d395202085-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-gnxmq\" (UID: \"40f541c2-3a4e-48ec-a01f-a3d395202085\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.809285 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.813431 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.815050 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.820081 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-z754p" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.825318 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.829265 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.850275 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7944p\" (UniqueName: \"kubernetes.io/projected/815b1df3-7d86-407a-a793-baec392c0f76-kube-api-access-7944p\") pod \"nova-operator-controller-manager-c7c776c96-74r7d\" (UID: \"815b1df3-7d86-407a-a793-baec392c0f76\") " pod="openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.865086 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjtlz\" (UniqueName: \"kubernetes.io/projected/b4a0b16f-5d81-4236-850f-03f628bb3595-kube-api-access-sjtlz\") pod \"octavia-operator-controller-manager-76fcc6dc7c-p9fxz\" (UID: \"b4a0b16f-5d81-4236-850f-03f628bb3595\") " pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.868458 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-f66b554c6-sfggm"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.876114 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.884849 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.897105 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-f66b554c6-sfggm"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.918433 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.920083 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7-cert\") pod \"infra-operator-controller-manager-7d857cc749-svx8h\" (UID: \"b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.920132 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsxmx\" (UniqueName: \"kubernetes.io/projected/40f541c2-3a4e-48ec-a01f-a3d395202085-kube-api-access-wsxmx\") pod \"openstack-baremetal-operator-controller-manager-6d776955-gnxmq\" (UID: \"40f541c2-3a4e-48ec-a01f-a3d395202085\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.920156 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww9dl\" (UniqueName: \"kubernetes.io/projected/eea4d20f-1d77-4e9b-bbc3-644ff1a5a314-kube-api-access-ww9dl\") pod \"swift-operator-controller-manager-bc7dc7bd9-v5qrd\" (UID: \"eea4d20f-1d77-4e9b-bbc3-644ff1a5a314\") " pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.920176 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stxkh\" (UniqueName: \"kubernetes.io/projected/116a4b20-5a9a-4456-8816-637e0740a792-kube-api-access-stxkh\") pod \"placement-operator-controller-manager-589c58c6c-s6hpz\" (UID: \"116a4b20-5a9a-4456-8816-637e0740a792\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.920210 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7rbl\" (UniqueName: \"kubernetes.io/projected/db952a6d-9ea1-482e-aec3-7a93fcd6587c-kube-api-access-q7rbl\") pod \"ovn-operator-controller-manager-9976ff44c-ss9vs\" (UID: \"db952a6d-9ea1-482e-aec3-7a93fcd6587c\") " pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.920229 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp4s2\" (UniqueName: \"kubernetes.io/projected/5cfa1456-1b45-4385-8fc5-27dccef45958-kube-api-access-qp4s2\") pod \"telemetry-operator-controller-manager-b8d54b5d7-bdcpn\" (UID: \"5cfa1456-1b45-4385-8fc5-27dccef45958\") " pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 
10:00:16.920253 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40f541c2-3a4e-48ec-a01f-a3d395202085-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-gnxmq\" (UID: \"40f541c2-3a4e-48ec-a01f-a3d395202085\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.920288 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75wgf\" (UniqueName: \"kubernetes.io/projected/7f9f19d7-d284-4757-94a1-1a86a8f28b17-kube-api-access-75wgf\") pod \"test-operator-controller-manager-f66b554c6-sfggm\" (UID: \"7f9f19d7-d284-4757-94a1-1a86a8f28b17\") " pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.927889 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-wlzx7" Sep 30 10:00:16 crc kubenswrapper[4970]: E0930 10:00:16.929169 4970 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 30 10:00:16 crc kubenswrapper[4970]: E0930 10:00:16.929217 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40f541c2-3a4e-48ec-a01f-a3d395202085-cert podName:40f541c2-3a4e-48ec-a01f-a3d395202085 nodeName:}" failed. No retries permitted until 2025-09-30 10:00:17.429197975 +0000 UTC m=+830.501048909 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/40f541c2-3a4e-48ec-a01f-a3d395202085-cert") pod "openstack-baremetal-operator-controller-manager-6d776955-gnxmq" (UID: "40f541c2-3a4e-48ec-a01f-a3d395202085") : secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.929635 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7-cert\") pod \"infra-operator-controller-manager-7d857cc749-svx8h\" (UID: \"b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7\") " pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.930081 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.931501 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.934574 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-4djrj" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.963363 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j"] Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.982636 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7rbl\" (UniqueName: \"kubernetes.io/projected/db952a6d-9ea1-482e-aec3-7a93fcd6587c-kube-api-access-q7rbl\") pod \"ovn-operator-controller-manager-9976ff44c-ss9vs\" (UID: \"db952a6d-9ea1-482e-aec3-7a93fcd6587c\") " pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs" Sep 30 10:00:16 crc kubenswrapper[4970]: I0930 10:00:16.990103 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gbnzq" podStartSLOduration=2.746666109 podStartE2EDuration="5.990072104s" podCreationTimestamp="2025-09-30 10:00:11 +0000 UTC" firstStartedPulling="2025-09-30 10:00:12.468362768 +0000 UTC m=+825.540213702" lastFinishedPulling="2025-09-30 10:00:15.711768763 +0000 UTC m=+828.783619697" observedRunningTime="2025-09-30 10:00:16.572350924 +0000 UTC m=+829.644201858" watchObservedRunningTime="2025-09-30 10:00:16.990072104 +0000 UTC m=+830.061923028" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.020732 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsxmx\" (UniqueName: \"kubernetes.io/projected/40f541c2-3a4e-48ec-a01f-a3d395202085-kube-api-access-wsxmx\") pod \"openstack-baremetal-operator-controller-manager-6d776955-gnxmq\" (UID: \"40f541c2-3a4e-48ec-a01f-a3d395202085\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.026268 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75wgf\" (UniqueName: \"kubernetes.io/projected/7f9f19d7-d284-4757-94a1-1a86a8f28b17-kube-api-access-75wgf\") pod \"test-operator-controller-manager-f66b554c6-sfggm\" (UID: \"7f9f19d7-d284-4757-94a1-1a86a8f28b17\") " pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.026519 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww9dl\" (UniqueName: \"kubernetes.io/projected/eea4d20f-1d77-4e9b-bbc3-644ff1a5a314-kube-api-access-ww9dl\") pod \"swift-operator-controller-manager-bc7dc7bd9-v5qrd\" (UID: \"eea4d20f-1d77-4e9b-bbc3-644ff1a5a314\") " pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.026557 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stxkh\" (UniqueName: \"kubernetes.io/projected/116a4b20-5a9a-4456-8816-637e0740a792-kube-api-access-stxkh\") pod \"placement-operator-controller-manager-589c58c6c-s6hpz\" (UID: \"116a4b20-5a9a-4456-8816-637e0740a792\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.026672 4970 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qp4s2\" (UniqueName: \"kubernetes.io/projected/5cfa1456-1b45-4385-8fc5-27dccef45958-kube-api-access-qp4s2\") pod \"telemetry-operator-controller-manager-b8d54b5d7-bdcpn\" (UID: \"5cfa1456-1b45-4385-8fc5-27dccef45958\") " pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.030108 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.077816 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.130161 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf4l7\" (UniqueName: \"kubernetes.io/projected/527884ff-dc23-4a9d-8911-aedf784b5eb1-kube-api-access-pf4l7\") pod \"watcher-operator-controller-manager-76669f99c-gtp7j\" (UID: \"527884ff-dc23-4a9d-8911-aedf784b5eb1\") " pod="openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.165915 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.180636 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75wgf\" (UniqueName: \"kubernetes.io/projected/7f9f19d7-d284-4757-94a1-1a86a8f28b17-kube-api-access-75wgf\") pod \"test-operator-controller-manager-f66b554c6-sfggm\" (UID: \"7f9f19d7-d284-4757-94a1-1a86a8f28b17\") " pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.184345 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww9dl\" (UniqueName: \"kubernetes.io/projected/eea4d20f-1d77-4e9b-bbc3-644ff1a5a314-kube-api-access-ww9dl\") pod \"swift-operator-controller-manager-bc7dc7bd9-v5qrd\" (UID: \"eea4d20f-1d77-4e9b-bbc3-644ff1a5a314\") " pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.188245 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qp4s2\" (UniqueName: \"kubernetes.io/projected/5cfa1456-1b45-4385-8fc5-27dccef45958-kube-api-access-qp4s2\") pod \"telemetry-operator-controller-manager-b8d54b5d7-bdcpn\" (UID: \"5cfa1456-1b45-4385-8fc5-27dccef45958\") " pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.220474 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq"] Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.222667 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.229380 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.229587 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-hvqbj" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.232135 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf4l7\" (UniqueName: \"kubernetes.io/projected/527884ff-dc23-4a9d-8911-aedf784b5eb1-kube-api-access-pf4l7\") pod \"watcher-operator-controller-manager-76669f99c-gtp7j\" (UID: \"527884ff-dc23-4a9d-8911-aedf784b5eb1\") " pod="openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.241643 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stxkh\" (UniqueName: \"kubernetes.io/projected/116a4b20-5a9a-4456-8816-637e0740a792-kube-api-access-stxkh\") pod \"placement-operator-controller-manager-589c58c6c-s6hpz\" (UID: \"116a4b20-5a9a-4456-8816-637e0740a792\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.247547 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.259825 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf4l7\" (UniqueName: \"kubernetes.io/projected/527884ff-dc23-4a9d-8911-aedf784b5eb1-kube-api-access-pf4l7\") pod \"watcher-operator-controller-manager-76669f99c-gtp7j\" (UID: \"527884ff-dc23-4a9d-8911-aedf784b5eb1\") " pod="openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.266081 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq"] Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.271628 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.336542 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b33f5230-0a43-418a-a25c-690de07ddc21-cert\") pod \"openstack-operator-controller-manager-5d64b45c9c-7q8rq\" (UID: \"b33f5230-0a43-418a-a25c-690de07ddc21\") " pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.336616 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgnbn\" (UniqueName: \"kubernetes.io/projected/b33f5230-0a43-418a-a25c-690de07ddc21-kube-api-access-zgnbn\") pod \"openstack-operator-controller-manager-5d64b45c9c-7q8rq\" (UID: \"b33f5230-0a43-418a-a25c-690de07ddc21\") " pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.343385 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26"] Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.344450 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.349183 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-gk26s" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.356644 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26"] Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.359733 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr"] Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.365550 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph"] Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.370906 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.404847 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.431458 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.438230 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b33f5230-0a43-418a-a25c-690de07ddc21-cert\") pod \"openstack-operator-controller-manager-5d64b45c9c-7q8rq\" (UID: \"b33f5230-0a43-418a-a25c-690de07ddc21\") " pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.438285 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgnbn\" (UniqueName: \"kubernetes.io/projected/b33f5230-0a43-418a-a25c-690de07ddc21-kube-api-access-zgnbn\") pod \"openstack-operator-controller-manager-5d64b45c9c-7q8rq\" (UID: \"b33f5230-0a43-418a-a25c-690de07ddc21\") " pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.438383 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40f541c2-3a4e-48ec-a01f-a3d395202085-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-gnxmq\" (UID: \"40f541c2-3a4e-48ec-a01f-a3d395202085\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" Sep 30 10:00:17 crc kubenswrapper[4970]: E0930 10:00:17.438623 4970 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 30 10:00:17 crc kubenswrapper[4970]: E0930 10:00:17.438854 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/40f541c2-3a4e-48ec-a01f-a3d395202085-cert podName:40f541c2-3a4e-48ec-a01f-a3d395202085 nodeName:}" failed. No retries permitted until 2025-09-30 10:00:18.438666831 +0000 UTC m=+831.510517765 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/40f541c2-3a4e-48ec-a01f-a3d395202085-cert") pod "openstack-baremetal-operator-controller-manager-6d776955-gnxmq" (UID: "40f541c2-3a4e-48ec-a01f-a3d395202085") : secret "openstack-baremetal-operator-webhook-server-cert" not found Sep 30 10:00:17 crc kubenswrapper[4970]: E0930 10:00:17.439575 4970 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Sep 30 10:00:17 crc kubenswrapper[4970]: E0930 10:00:17.439607 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b33f5230-0a43-418a-a25c-690de07ddc21-cert podName:b33f5230-0a43-418a-a25c-690de07ddc21 nodeName:}" failed. No retries permitted until 2025-09-30 10:00:17.939597747 +0000 UTC m=+831.011448681 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b33f5230-0a43-418a-a25c-690de07ddc21-cert") pod "openstack-operator-controller-manager-5d64b45c9c-7q8rq" (UID: "b33f5230-0a43-418a-a25c-690de07ddc21") : secret "webhook-server-cert" not found Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.476678 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgnbn\" (UniqueName: \"kubernetes.io/projected/b33f5230-0a43-418a-a25c-690de07ddc21-kube-api-access-zgnbn\") pod \"openstack-operator-controller-manager-5d64b45c9c-7q8rq\" (UID: \"b33f5230-0a43-418a-a25c-690de07ddc21\") " pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.540180 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99674\" (UniqueName: \"kubernetes.io/projected/7f744173-6696-4797-a55c-85b498bff4da-kube-api-access-99674\") pod \"rabbitmq-cluster-operator-manager-79d8469568-kpw26\" (UID: \"7f744173-6696-4797-a55c-85b498bff4da\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.591116 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr" event={"ID":"d288c95d-759c-4b29-8be6-304869f99ae7","Type":"ContainerStarted","Data":"2ba949a9954a6f76d718953de5fb9269be8a23566b81f46da71b8362c6ad1eca"} Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.595377 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph" event={"ID":"c9a40f4a-1de7-45da-91e9-4f11637452b2","Type":"ContainerStarted","Data":"7fdfb2335bf6b660ae39dfa106637f71057e5b8f64ee2779c9e4e7c8ca36fb83"} Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.642985 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99674\" (UniqueName: \"kubernetes.io/projected/7f744173-6696-4797-a55c-85b498bff4da-kube-api-access-99674\") pod \"rabbitmq-cluster-operator-manager-79d8469568-kpw26\" (UID: \"7f744173-6696-4797-a55c-85b498bff4da\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.721928 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99674\" (UniqueName: \"kubernetes.io/projected/7f744173-6696-4797-a55c-85b498bff4da-kube-api-access-99674\") pod \"rabbitmq-cluster-operator-manager-79d8469568-kpw26\" (UID: \"7f744173-6696-4797-a55c-85b498bff4da\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.777844 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj"] Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.780929 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj"] Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.784865 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.938708 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-885ws" podUID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" containerName="registry-server" probeResult="failure" output=< Sep 30 10:00:17 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Sep 30 10:00:17 crc kubenswrapper[4970]: > Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.953965 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b33f5230-0a43-418a-a25c-690de07ddc21-cert\") pod \"openstack-operator-controller-manager-5d64b45c9c-7q8rq\" (UID: \"b33f5230-0a43-418a-a25c-690de07ddc21\") " pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" Sep 30 10:00:17 crc kubenswrapper[4970]: I0930 10:00:17.973956 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b33f5230-0a43-418a-a25c-690de07ddc21-cert\") pod \"openstack-operator-controller-manager-5d64b45c9c-7q8rq\" (UID: \"b33f5230-0a43-418a-a25c-690de07ddc21\") " pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" Sep 30 10:00:18 crc kubenswrapper[4970]: I0930 10:00:18.068497 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" Sep 30 10:00:18 crc kubenswrapper[4970]: I0930 10:00:18.417772 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp"] Sep 30 10:00:18 crc kubenswrapper[4970]: I0930 10:00:18.437027 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw"] Sep 30 10:00:18 crc kubenswrapper[4970]: I0930 10:00:18.466135 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40f541c2-3a4e-48ec-a01f-a3d395202085-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-gnxmq\" (UID: \"40f541c2-3a4e-48ec-a01f-a3d395202085\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" Sep 30 10:00:18 crc kubenswrapper[4970]: I0930 10:00:18.471657 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/40f541c2-3a4e-48ec-a01f-a3d395202085-cert\") pod \"openstack-baremetal-operator-controller-manager-6d776955-gnxmq\" (UID: \"40f541c2-3a4e-48ec-a01f-a3d395202085\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" Sep 30 10:00:18 crc kubenswrapper[4970]: I0930 10:00:18.497607 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" Sep 30 10:00:18 crc kubenswrapper[4970]: I0930 10:00:18.561104 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj"] Sep 30 10:00:18 crc kubenswrapper[4970]: W0930 10:00:18.567978 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcefaa649_872b_43be_9763_85ee950bb5d6.slice/crio-2534efdef7e648cec78be538d150a24a29675818dfb87eeda8f5a2ee50e4c2b3 WatchSource:0}: Error finding container 2534efdef7e648cec78be538d150a24a29675818dfb87eeda8f5a2ee50e4c2b3: Status 404 returned error can't find the container with id 2534efdef7e648cec78be538d150a24a29675818dfb87eeda8f5a2ee50e4c2b3 Sep 30 10:00:18 crc kubenswrapper[4970]: I0930 10:00:18.617303 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj" event={"ID":"cefaa649-872b-43be-9763-85ee950bb5d6","Type":"ContainerStarted","Data":"2534efdef7e648cec78be538d150a24a29675818dfb87eeda8f5a2ee50e4c2b3"} Sep 30 10:00:18 crc kubenswrapper[4970]: I0930 10:00:18.618841 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw" event={"ID":"908cf55d-1ac7-4814-9f4e-ddb57acb1b76","Type":"ContainerStarted","Data":"ef79c674b33a61710ebd2420cc850ea877a9edf457cb3bdbc90635bad49863b8"} Sep 30 10:00:18 crc kubenswrapper[4970]: I0930 10:00:18.620447 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" event={"ID":"b611fd3e-a529-4c90-8e81-c7352004d62f","Type":"ContainerStarted","Data":"d81dd13e5c2c0f3d81af8c8863fb19906ea57ab71833093b39eed9117f0ddd9c"} Sep 30 10:00:18 crc kubenswrapper[4970]: I0930 10:00:18.664596 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj" event={"ID":"7131ae21-9827-4028-9841-fbc480e7b938","Type":"ContainerStarted","Data":"9c6e68ffd9df38d6410c9f0213ded8fdf9fbe050f8369e7b3fe5dbfc28e05ecb"} Sep 30 10:00:18 crc kubenswrapper[4970]: I0930 10:00:18.680249 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp" event={"ID":"0dab040d-a74a-48f1-b2e5-fb2fe6de3b58","Type":"ContainerStarted","Data":"8585db76a322a8e92a182fc59b6e5f2c89dce4fb2e358a6ff9f18759952fa507"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.116124 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.117736 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.156250 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.237423 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.237546 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.249579 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.257034 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.288925 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.302405 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.318022 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.323431 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.334593 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-f66b554c6-sfggm"] Sep 30 10:00:19 crc kubenswrapper[4970]: W0930 10:00:19.347555 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f9f19d7_d284_4757_94a1_1a86a8f28b17.slice/crio-14e4feffc7aa00ea0a3f905078c007387597a64aae6ba852069d290d92e15a73 WatchSource:0}: Error finding container 14e4feffc7aa00ea0a3f905078c007387597a64aae6ba852069d290d92e15a73: Status 404 returned error can't find the container with id 14e4feffc7aa00ea0a3f905078c007387597a64aae6ba852069d290d92e15a73 Sep 30 10:00:19 crc kubenswrapper[4970]: E0930 10:00:19.356142 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:a303e460aec09217f90043b8ff19c01061af003b614833b33a593df9c00ddf80,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-75wgf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-f66b554c6-sfggm_openstack-operators(7f9f19d7-d284-4757-94a1-1a86a8f28b17): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.455887 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.460845 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn"] Sep 30 10:00:19 crc kubenswrapper[4970]: W0930 10:00:19.472768 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cfa1456_1b45_4385_8fc5_27dccef45958.slice/crio-6c597ede54ee257bf20c5e1172ffd3bec94baac50d8b12664d2adf5bc04eb5b7 WatchSource:0}: Error finding container 6c597ede54ee257bf20c5e1172ffd3bec94baac50d8b12664d2adf5bc04eb5b7: Status 404 returned error can't find the container with id 6c597ede54ee257bf20c5e1172ffd3bec94baac50d8b12664d2adf5bc04eb5b7 Sep 30 10:00:19 crc kubenswrapper[4970]: E0930 10:00:19.478970 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:8fdf377daf05e2fa7346505017078fa81981dd945bf635a64c8022633c68118f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qp4s2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-b8d54b5d7-bdcpn_openstack-operators(5cfa1456-1b45-4385-8fc5-27dccef45958): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Sep 30 10:00:19 crc kubenswrapper[4970]: W0930 10:00:19.481119 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4a0b16f_5d81_4236_850f_03f628bb3595.slice/crio-0918ed3d5a34630fb964ccc0ebff53fa9ef12a611e99dfb59cbc53ff437b3b7b WatchSource:0}: Error finding container 0918ed3d5a34630fb964ccc0ebff53fa9ef12a611e99dfb59cbc53ff437b3b7b: Status 404 returned error can't find the container with id 0918ed3d5a34630fb964ccc0ebff53fa9ef12a611e99dfb59cbc53ff437b3b7b Sep 30 10:00:19 crc kubenswrapper[4970]: E0930 10:00:19.489309 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:4d08afd31dc5ded10c54a5541f514ac351e9b40a183285b3db27d0757a6354c8,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sjtlz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-76fcc6dc7c-p9fxz_openstack-operators(b4a0b16f-5d81-4236-850f-03f628bb3595): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.502723 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.553451 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq"] Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.688739 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" event={"ID":"b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7","Type":"ContainerStarted","Data":"983dd36b41aa479e6b2153524ba44ebe9c93d36c73625d524ccf913dd891dd00"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.690083 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2" event={"ID":"1b1a92f2-46aa-492c-906b-1b86c58ba818","Type":"ContainerStarted","Data":"68b68c80292bc5a072e4188550f4e3c31dd6185224cbd99be47b285f99a7af9d"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.691530 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd" event={"ID":"eea4d20f-1d77-4e9b-bbc3-644ff1a5a314","Type":"ContainerStarted","Data":"35e43cfd3d82ac62b1ee55dcad36240918f47b456b959e78c69af3ef34bcdc6c"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.693945 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" event={"ID":"b33f5230-0a43-418a-a25c-690de07ddc21","Type":"ContainerStarted","Data":"cc1f0897d56a6f477d21cefee15ce489589e6bc42ea43959dc18532e1e53094d"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.696484 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" event={"ID":"40f541c2-3a4e-48ec-a01f-a3d395202085","Type":"ContainerStarted","Data":"5596dde0ae803af6ca27644d7acf9d5d8fe3bf54a6bd79b50ada576d007d07b9"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.698049 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd" event={"ID":"9d9bdcb3-a944-4379-8dfd-858a022e946a","Type":"ContainerStarted","Data":"2bcc4253a1449dbe9b824c390ad7496e05d2a7ff2a9be2defdc70bf661ac8d96"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.700132 4970 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" event={"ID":"5cfa1456-1b45-4385-8fc5-27dccef45958","Type":"ContainerStarted","Data":"6c597ede54ee257bf20c5e1172ffd3bec94baac50d8b12664d2adf5bc04eb5b7"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.701430 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z" event={"ID":"0283cb68-98f4-4dcf-99c0-55ebc251dc19","Type":"ContainerStarted","Data":"e9b3f5a6da5018f0fafade48bf8304031b1c1d77d5f35cae518da4de7bcf22a6"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.703517 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d" event={"ID":"815b1df3-7d86-407a-a793-baec392c0f76","Type":"ContainerStarted","Data":"3a8639afc0897abd6b9cac0aad5132a60fa48971218fad813c5f7dccb9c0e513"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.705080 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j" event={"ID":"527884ff-dc23-4a9d-8911-aedf784b5eb1","Type":"ContainerStarted","Data":"73e73bbd70be9a8891ba2dec2673a98f84ba85f9512f73fef17a1355b33bf7ae"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.706219 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26" event={"ID":"7f744173-6696-4797-a55c-85b498bff4da","Type":"ContainerStarted","Data":"b0a26465019fa00fd1e25007704017b7a6d7e1b0441e99a48249cb1b06f0f0ab"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.708641 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs" event={"ID":"db952a6d-9ea1-482e-aec3-7a93fcd6587c","Type":"ContainerStarted","Data":"7581e00fa647b2cdf508c499100a387ff0efd5fc332e3549d3b2ad333c932980"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.710156 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" event={"ID":"116a4b20-5a9a-4456-8816-637e0740a792","Type":"ContainerStarted","Data":"21ab4fadf1eea171c5d7dbec01fde363468319924fc400fed64cf1dc2f577aa2"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.711653 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" event={"ID":"b4a0b16f-5d81-4236-850f-03f628bb3595","Type":"ContainerStarted","Data":"0918ed3d5a34630fb964ccc0ebff53fa9ef12a611e99dfb59cbc53ff437b3b7b"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.713408 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" event={"ID":"7f9f19d7-d284-4757-94a1-1a86a8f28b17","Type":"ContainerStarted","Data":"14e4feffc7aa00ea0a3f905078c007387597a64aae6ba852069d290d92e15a73"} Sep 30 10:00:19 crc kubenswrapper[4970]: I0930 10:00:19.714616 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q" event={"ID":"ae58a1aa-0503-4387-91cf-fc6f396a180f","Type":"ContainerStarted","Data":"36ff47ffcc6486a685468749c198e7a6fcf321b9d53249d374a98426478f26e3"} Sep 30 10:00:19 crc kubenswrapper[4970]: E0930 10:00:19.824058 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" podUID="5cfa1456-1b45-4385-8fc5-27dccef45958" Sep 30 10:00:19 crc kubenswrapper[4970]: E0930 10:00:19.880329 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" podUID="b4a0b16f-5d81-4236-850f-03f628bb3595" Sep 30 10:00:19 crc kubenswrapper[4970]: E0930 10:00:19.904636 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" podUID="7f9f19d7-d284-4757-94a1-1a86a8f28b17" Sep 30 10:00:20 crc kubenswrapper[4970]: I0930 10:00:20.728310 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" event={"ID":"7f9f19d7-d284-4757-94a1-1a86a8f28b17","Type":"ContainerStarted","Data":"d705189743f1a7f2f175f3c174b31e7a8d77b114d5766f2f4a711ccb1fb083b9"} Sep 30 10:00:20 crc kubenswrapper[4970]: E0930 10:00:20.732253 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:a303e460aec09217f90043b8ff19c01061af003b614833b33a593df9c00ddf80\\\"\"" pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" podUID="7f9f19d7-d284-4757-94a1-1a86a8f28b17" Sep 30 10:00:20 crc kubenswrapper[4970]: I0930 10:00:20.734430 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" event={"ID":"b4a0b16f-5d81-4236-850f-03f628bb3595","Type":"ContainerStarted","Data":"f48f257185f6cdd09b3a386d6e5b505d50173ee503b6a28f19857b7e23dc6216"} Sep 30 10:00:20 crc kubenswrapper[4970]: E0930 10:00:20.736068 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:4d08afd31dc5ded10c54a5541f514ac351e9b40a183285b3db27d0757a6354c8\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" podUID="b4a0b16f-5d81-4236-850f-03f628bb3595" Sep 30 10:00:20 crc kubenswrapper[4970]: I0930 10:00:20.740891 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" event={"ID":"5cfa1456-1b45-4385-8fc5-27dccef45958","Type":"ContainerStarted","Data":"51e4c4fcbdcec3b6211646b887b5583b33e99bc01c1e74f1e38eebf023587d9d"} Sep 30 10:00:20 crc kubenswrapper[4970]: E0930 10:00:20.742676 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:8fdf377daf05e2fa7346505017078fa81981dd945bf635a64c8022633c68118f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" podUID="5cfa1456-1b45-4385-8fc5-27dccef45958" Sep 30 10:00:20 crc kubenswrapper[4970]: I0930 10:00:20.745925 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" 
event={"ID":"b33f5230-0a43-418a-a25c-690de07ddc21","Type":"ContainerStarted","Data":"a93e012cc4428ef861966da6e75779397e0b7558785704acbeb4566caefe2ec2"} Sep 30 10:00:20 crc kubenswrapper[4970]: I0930 10:00:20.745966 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" event={"ID":"b33f5230-0a43-418a-a25c-690de07ddc21","Type":"ContainerStarted","Data":"077c00a7fd62c95c2cf541868b5217cef09b4814e58bc592e6e46ef1c7066315"} Sep 30 10:00:20 crc kubenswrapper[4970]: I0930 10:00:20.746506 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" Sep 30 10:00:21 crc kubenswrapper[4970]: I0930 10:00:21.514590 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:21 crc kubenswrapper[4970]: I0930 10:00:21.514670 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:21 crc kubenswrapper[4970]: I0930 10:00:21.558368 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:21 crc kubenswrapper[4970]: I0930 10:00:21.586658 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq" podStartSLOduration=5.586626283 podStartE2EDuration="5.586626283s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:00:20.831293553 +0000 UTC m=+833.903144487" watchObservedRunningTime="2025-09-30 10:00:21.586626283 +0000 UTC m=+834.658477227" Sep 30 10:00:21 crc kubenswrapper[4970]: E0930 10:00:21.756287 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:8fdf377daf05e2fa7346505017078fa81981dd945bf635a64c8022633c68118f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" podUID="5cfa1456-1b45-4385-8fc5-27dccef45958" Sep 30 10:00:21 crc kubenswrapper[4970]: E0930 10:00:21.756334 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:a303e460aec09217f90043b8ff19c01061af003b614833b33a593df9c00ddf80\\\"\"" pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" podUID="7f9f19d7-d284-4757-94a1-1a86a8f28b17" Sep 30 10:00:21 crc kubenswrapper[4970]: E0930 10:00:21.756438 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:4d08afd31dc5ded10c54a5541f514ac351e9b40a183285b3db27d0757a6354c8\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" podUID="b4a0b16f-5d81-4236-850f-03f628bb3595" Sep 30 10:00:21 crc kubenswrapper[4970]: I0930 10:00:21.807566 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:21 crc kubenswrapper[4970]: I0930 
Sep 30 10:00:21 crc kubenswrapper[4970]: I0930 10:00:21.873620 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbnzq"]
Sep 30 10:00:23 crc kubenswrapper[4970]: I0930 10:00:23.769926 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gbnzq" podUID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" containerName="registry-server" containerID="cri-o://4030b267a35724d350401aff433288f473b403ac03b975ce3838282e4db27398" gracePeriod=2
Sep 30 10:00:24 crc kubenswrapper[4970]: I0930 10:00:24.781923 4970 generic.go:334] "Generic (PLEG): container finished" podID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" containerID="4030b267a35724d350401aff433288f473b403ac03b975ce3838282e4db27398" exitCode=0
Sep 30 10:00:24 crc kubenswrapper[4970]: I0930 10:00:24.782041 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbnzq" event={"ID":"2b7d0772-1c7d-443b-a9f7-fa64461d84bc","Type":"ContainerDied","Data":"4030b267a35724d350401aff433288f473b403ac03b975ce3838282e4db27398"}
Sep 30 10:00:26 crc kubenswrapper[4970]: I0930 10:00:26.787393 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-885ws"
Sep 30 10:00:26 crc kubenswrapper[4970]: I0930 10:00:26.834002 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-885ws"
Sep 30 10:00:27 crc kubenswrapper[4970]: I0930 10:00:27.024667 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-885ws"]
Sep 30 10:00:27 crc kubenswrapper[4970]: I0930 10:00:27.809876 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-885ws" podUID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" containerName="registry-server" containerID="cri-o://3b31d8a9a5588c03aa7590d2962884308ea168abe7d4d5fcd7e685c48d9ea2b0" gracePeriod=2
Sep 30 10:00:28 crc kubenswrapper[4970]: I0930 10:00:28.075480 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-5d64b45c9c-7q8rq"
Sep 30 10:00:28 crc kubenswrapper[4970]: I0930 10:00:28.820749 4970 generic.go:334] "Generic (PLEG): container finished" podID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" containerID="3b31d8a9a5588c03aa7590d2962884308ea168abe7d4d5fcd7e685c48d9ea2b0" exitCode=0
Sep 30 10:00:28 crc kubenswrapper[4970]: I0930 10:00:28.820803 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-885ws" event={"ID":"ec858bf1-5d08-410d-b66e-4bb2e6c240b9","Type":"ContainerDied","Data":"3b31d8a9a5588c03aa7590d2962884308ea168abe7d4d5fcd7e685c48d9ea2b0"}
Sep 30 10:00:31 crc kubenswrapper[4970]: E0930 10:00:31.515994 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4030b267a35724d350401aff433288f473b403ac03b975ce3838282e4db27398 is running failed: container process not found" containerID="4030b267a35724d350401aff433288f473b403ac03b975ce3838282e4db27398" cmd=["grpc_health_probe","-addr=:50051"]
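
The ExecSync failures that begin above and repeat immediately below are a shutdown race rather than a new problem: both marketplace pods were deleted, their registry-server containers were killed with gracePeriod=2, and the readiness prober then tried to exec its health check into containers whose processes were already gone, so CRI-O answers NotFound. Rebuilt from the cmd=[...] field in these entries, the probe being executed is equivalent to the following k8s.io/api/core/v1 definition (command and port are taken from the log; the timing fields are illustrative assumptions):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Readiness probe the kubelet was running against registry-server.
        probe := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                Exec: &corev1.ExecAction{
                    // From cmd=["grpc_health_probe","-addr=:50051"] above.
                    Command: []string{"grpc_health_probe", "-addr=:50051"},
                },
            },
            // Illustrative values; not recorded in this log.
            InitialDelaySeconds: 5,
            PeriodSeconds:       10,
            FailureThreshold:    3,
        }
        fmt.Println("probe command:", probe.Exec.Command)
    }

Because the pods are already terminating, the "Probe errored" entry that follows is logged and dropped; a failed readiness probe only affects endpoint membership and never restarts a container.
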
found" containerID="4030b267a35724d350401aff433288f473b403ac03b975ce3838282e4db27398" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 10:00:31 crc kubenswrapper[4970]: E0930 10:00:31.517563 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4030b267a35724d350401aff433288f473b403ac03b975ce3838282e4db27398 is running failed: container process not found" containerID="4030b267a35724d350401aff433288f473b403ac03b975ce3838282e4db27398" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 10:00:31 crc kubenswrapper[4970]: E0930 10:00:31.517602 4970 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4030b267a35724d350401aff433288f473b403ac03b975ce3838282e4db27398 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-gbnzq" podUID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" containerName="registry-server" Sep 30 10:00:33 crc kubenswrapper[4970]: E0930 10:00:33.860383 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:21792a2317c0a55e40b2a02a7d5d4682b76538ed2a2e0633199aa395e60ecc72" Sep 30 10:00:33 crc kubenswrapper[4970]: E0930 10:00:33.861110 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:21792a2317c0a55e40b2a02a7d5d4682b76538ed2a2e0633199aa395e60ecc72,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8pzz9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-84958c4d49-q2llj_openstack-operators(b611fd3e-a529-4c90-8e81-c7352004d62f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:00:34 crc kubenswrapper[4970]: E0930 10:00:34.430769 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2" Sep 30 10:00:34 crc kubenswrapper[4970]: E0930 10:00:34.431360 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-stxkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-589c58c6c-s6hpz_openstack-operators(116a4b20-5a9a-4456-8816-637e0740a792): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Sep 30 10:00:34 crc kubenswrapper[4970]: I0930 10:00:34.822370 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 10:00:34 crc kubenswrapper[4970]: I0930 10:00:34.822459 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 10:00:34 crc kubenswrapper[4970]: I0930 10:00:34.822524 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg"
Sep 30 10:00:34 crc kubenswrapper[4970]: I0930 10:00:34.823347 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"00bf1152552c1c478a9068852104f804953a83e3c45cb022e488704fa11cacf9"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 30 10:00:34 crc kubenswrapper[4970]: I0930 10:00:34.823439 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://00bf1152552c1c478a9068852104f804953a83e3c45cb022e488704fa11cacf9" gracePeriod=600
Sep 30 10:00:34 crc kubenswrapper[4970]: E0930 10:00:34.944496 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:225524223bf2a7f3a4ce95958fc9ca6fdab02745fb70374e8ff5bf1ddaceda4b"
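
The machine-config-daemon sequence above is a complete liveness-restart cycle in four entries: the prober records the failed GET against http://127.0.0.1:8798/health, the sync loop marks the probe unhealthy, kuberuntime_manager attaches the "failed liveness probe, will be restarted" message, and the container is killed with the pod's 600-second grace period before being recreated (the ContainerDied/ContainerStarted pair for machine-config-daemon-gcphg further down). The operator containers dumped at length in the surrounding "Unhandled Error" entries carry HTTP probes of the same shape; decoding the struct dump into its k8s.io/api/core/v1 form, with every value read off the dump itself:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // LivenessProbe from the operator Container dumps:
        // HTTPGet /healthz on port 8081 ("Port:{0 8081 }" is an int-typed
        // IntOrString), 15s initial delay, probed every 20s, 3 failures to kill.
        liveness := &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path:   "/healthz",
                    Port:   intstr.FromInt32(8081),
                    Scheme: corev1.URISchemeHTTP,
                },
            },
            InitialDelaySeconds: 15,
            TimeoutSeconds:      1,
            PeriodSeconds:       20,
            SuccessThreshold:    1,
            FailureThreshold:    3,
        }
        // Worst-case detection latency is roughly period * failureThreshold.
        fmt.Printf("kill after ~%ds of sustained failure\n",
            liveness.PeriodSeconds*liveness.FailureThreshold)
    }

The readiness twin in the same dumps differs only in path (/readyz), delay (5s) and period (10s), and failing it only removes the pod from service endpoints instead of restarting the container.
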
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:225524223bf2a7f3a4ce95958fc9ca6fdab02745fb70374e8ff5bf1ddaceda4b,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-99674,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-79d8469568-kpw26_openstack-operators(7f744173-6696-4797-a55c-85b498bff4da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:00:34 crc kubenswrapper[4970]: E0930 10:00:34.945951 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26" podUID="7f744173-6696-4797-a55c-85b498bff4da" Sep 30 10:00:34 crc kubenswrapper[4970]: I0930 10:00:34.983934 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.039031 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.052981 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-catalog-content\") pod \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\" (UID: \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\") " Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.053077 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2shz\" (UniqueName: \"kubernetes.io/projected/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-kube-api-access-n2shz\") pod \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\" (UID: \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\") " Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.053138 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-utilities\") pod \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\" (UID: \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\") " Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.053168 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prr2p\" (UniqueName: \"kubernetes.io/projected/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-kube-api-access-prr2p\") pod \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\" (UID: \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\") " Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.053208 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-utilities\") pod \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\" (UID: \"ec858bf1-5d08-410d-b66e-4bb2e6c240b9\") " Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.053275 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-catalog-content\") pod \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\" (UID: \"2b7d0772-1c7d-443b-a9f7-fa64461d84bc\") " Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.054912 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-utilities" (OuterVolumeSpecName: "utilities") pod "2b7d0772-1c7d-443b-a9f7-fa64461d84bc" (UID: "2b7d0772-1c7d-443b-a9f7-fa64461d84bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.055722 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-utilities" (OuterVolumeSpecName: "utilities") pod "ec858bf1-5d08-410d-b66e-4bb2e6c240b9" (UID: "ec858bf1-5d08-410d-b66e-4bb2e6c240b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.074684 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-kube-api-access-n2shz" (OuterVolumeSpecName: "kube-api-access-n2shz") pod "2b7d0772-1c7d-443b-a9f7-fa64461d84bc" (UID: "2b7d0772-1c7d-443b-a9f7-fa64461d84bc"). InnerVolumeSpecName "kube-api-access-n2shz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.076829 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-kube-api-access-prr2p" (OuterVolumeSpecName: "kube-api-access-prr2p") pod "ec858bf1-5d08-410d-b66e-4bb2e6c240b9" (UID: "ec858bf1-5d08-410d-b66e-4bb2e6c240b9"). InnerVolumeSpecName "kube-api-access-prr2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.090059 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b7d0772-1c7d-443b-a9f7-fa64461d84bc" (UID: "2b7d0772-1c7d-443b-a9f7-fa64461d84bc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.144320 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ec858bf1-5d08-410d-b66e-4bb2e6c240b9" (UID: "ec858bf1-5d08-410d-b66e-4bb2e6c240b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.155353 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.155381 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prr2p\" (UniqueName: \"kubernetes.io/projected/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-kube-api-access-prr2p\") on node \"crc\" DevicePath \"\"" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.155390 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.155399 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.155408 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec858bf1-5d08-410d-b66e-4bb2e6c240b9-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.155417 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2shz\" (UniqueName: \"kubernetes.io/projected/2b7d0772-1c7d-443b-a9f7-fa64461d84bc-kube-api-access-n2shz\") on node \"crc\" DevicePath \"\"" Sep 30 10:00:35 crc kubenswrapper[4970]: E0930 10:00:35.761222 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" podUID="116a4b20-5a9a-4456-8816-637e0740a792" Sep 30 10:00:35 crc kubenswrapper[4970]: E0930 10:00:35.769530 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" podUID="b611fd3e-a529-4c90-8e81-c7352004d62f" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.876636 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="00bf1152552c1c478a9068852104f804953a83e3c45cb022e488704fa11cacf9" exitCode=0 Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.876718 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"00bf1152552c1c478a9068852104f804953a83e3c45cb022e488704fa11cacf9"} Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.876772 4970 scope.go:117] "RemoveContainer" containerID="dc4842ae8bfd4b8ab89c01d01940a6e7946834cd5975d69ed84b67c89bbc8813" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.878890 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" event={"ID":"b611fd3e-a529-4c90-8e81-c7352004d62f","Type":"ContainerStarted","Data":"148767ef9f4b6b04444adbac8fd968ded4090e5823a8d59e8e52fd12e64551f7"} Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.883326 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-885ws" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.883322 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-885ws" event={"ID":"ec858bf1-5d08-410d-b66e-4bb2e6c240b9","Type":"ContainerDied","Data":"c242f7f5b1098495f0ccf92d1be453aa25c13e6df44b3607725564cb6ffdbd78"} Sep 30 10:00:35 crc kubenswrapper[4970]: E0930 10:00:35.886078 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:21792a2317c0a55e40b2a02a7d5d4682b76538ed2a2e0633199aa395e60ecc72\\\"\"" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" podUID="b611fd3e-a529-4c90-8e81-c7352004d62f" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.888702 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gbnzq" event={"ID":"2b7d0772-1c7d-443b-a9f7-fa64461d84bc","Type":"ContainerDied","Data":"b029d1ff22f2cb5726cc218e243437bf04b2c9c3fe9f1ea39890061227c03f61"} Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.888875 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gbnzq" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.896018 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs" event={"ID":"db952a6d-9ea1-482e-aec3-7a93fcd6587c","Type":"ContainerStarted","Data":"8aa20156977aeb69acff84c4c9d5bd4275d0a6725c9f93f5dc4c57c5a47a3bd8"} Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.898671 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" event={"ID":"116a4b20-5a9a-4456-8816-637e0740a792","Type":"ContainerStarted","Data":"edbb6a213c741d60c3722bbef6678be7790c7dbe889ab4c7caafe4f34d68979f"} Sep 30 10:00:35 crc kubenswrapper[4970]: E0930 10:00:35.925077 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2\\\"\"" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" podUID="116a4b20-5a9a-4456-8816-637e0740a792" Sep 30 10:00:35 crc kubenswrapper[4970]: E0930 10:00:35.925631 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:225524223bf2a7f3a4ce95958fc9ca6fdab02745fb70374e8ff5bf1ddaceda4b\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26" podUID="7f744173-6696-4797-a55c-85b498bff4da" Sep 30 10:00:35 crc kubenswrapper[4970]: I0930 10:00:35.979760 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-885ws"] Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.004209 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-885ws"] Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.017154 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbnzq"] Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.018667 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gbnzq"] Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.190254 4970 scope.go:117] "RemoveContainer" containerID="3b31d8a9a5588c03aa7590d2962884308ea168abe7d4d5fcd7e685c48d9ea2b0" Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.275966 4970 scope.go:117] "RemoveContainer" containerID="731aecc62e0f796ce8796e9c160dc955298865605275b5704f47bada49f0e999" Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.338546 4970 scope.go:117] "RemoveContainer" containerID="ec7c9b0b972845f0ef098cbf68087a868dd52335ff77a84c5ec22e55769507ff" Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.394377 4970 scope.go:117] "RemoveContainer" containerID="4030b267a35724d350401aff433288f473b403ac03b975ce3838282e4db27398" Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.428713 4970 scope.go:117] "RemoveContainer" containerID="2ac34a97a5d06215e6048258c2022004343d6191b6bff097c3f525ce72d5e140" Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.478225 4970 scope.go:117] "RemoveContainer" containerID="06c172fcc432ebf572c0f41c4f50ac5f865795b08631ff1a66f8c82d3729daf7" Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.925702 
4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z" event={"ID":"0283cb68-98f4-4dcf-99c0-55ebc251dc19","Type":"ContainerStarted","Data":"2377c72b5f8cda44d3eecce96a4f54bb051da5de49c23eb3f9fd0a2513d55830"} Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.937927 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" event={"ID":"40f541c2-3a4e-48ec-a01f-a3d395202085","Type":"ContainerStarted","Data":"3f213b72e4d4d8ae502eef35087a29b5bbcd5135e9b3264af71f14d91a0820e4"} Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.948481 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d" event={"ID":"815b1df3-7d86-407a-a793-baec392c0f76","Type":"ContainerStarted","Data":"beb75cfed489418b58f4fb178c98564cdf790de4a6a15bef84cfea7565f3864b"} Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.950444 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"7e145a3822b96bea172852c66d616934dcbd854c3400bad7729b559f1279373d"} Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.951411 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp" event={"ID":"0dab040d-a74a-48f1-b2e5-fb2fe6de3b58","Type":"ContainerStarted","Data":"5c63bf2e96941031bbfc72fb15fb65b4c8b9618e11e518121ae2af9f3607d30d"} Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.952374 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr" event={"ID":"d288c95d-759c-4b29-8be6-304869f99ae7","Type":"ContainerStarted","Data":"8194f676fab9aa1e0d039906c5f32d025be302cc6cda3faafa69dd134490bac1"} Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.978449 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd" event={"ID":"eea4d20f-1d77-4e9b-bbc3-644ff1a5a314","Type":"ContainerStarted","Data":"966668ec713cd124a1a5686c9f4555391355fe44eeea81f4039c5fbf0b1c6ed1"} Sep 30 10:00:36 crc kubenswrapper[4970]: I0930 10:00:36.994600 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw" event={"ID":"908cf55d-1ac7-4814-9f4e-ddb57acb1b76","Type":"ContainerStarted","Data":"d55179613ce9e6af4b1884d1e6dd715046ac613edeb0ccc0984a9fa6630459aa"} Sep 30 10:00:37 crc kubenswrapper[4970]: I0930 10:00:37.002798 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q" event={"ID":"ae58a1aa-0503-4387-91cf-fc6f396a180f","Type":"ContainerStarted","Data":"50ea2466c7c925f24268126af6babde27066f3615dd78f280f6f139c65408032"} Sep 30 10:00:37 crc kubenswrapper[4970]: I0930 10:00:37.017435 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" event={"ID":"b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7","Type":"ContainerStarted","Data":"d82181e81698071449dbd74d68873bd83b427b8300a46a074264585c94cb1df8"} Sep 30 10:00:37 crc kubenswrapper[4970]: I0930 10:00:37.026409 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph" event={"ID":"c9a40f4a-1de7-45da-91e9-4f11637452b2","Type":"ContainerStarted","Data":"e37b9225762d2ae1009623fd40a785bac0c201e2508f154fb124f3f8eba3eff9"} Sep 30 10:00:37 crc kubenswrapper[4970]: I0930 10:00:37.040510 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd" event={"ID":"9d9bdcb3-a944-4379-8dfd-858a022e946a","Type":"ContainerStarted","Data":"31ca1f5e6d30c711270c147b0116735904cc430fc82364661280862be11b5371"} Sep 30 10:00:37 crc kubenswrapper[4970]: I0930 10:00:37.064236 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj" event={"ID":"cefaa649-872b-43be-9763-85ee950bb5d6","Type":"ContainerStarted","Data":"13d900380986b620e563b77931bdc6a6c1efeac27fbbdd862a8f43655429ce45"} Sep 30 10:00:37 crc kubenswrapper[4970]: I0930 10:00:37.088543 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j" event={"ID":"527884ff-dc23-4a9d-8911-aedf784b5eb1","Type":"ContainerStarted","Data":"a838254da9c76b2b0e742817aa657974f198724cae1afe4faa74a0819d41ade0"} Sep 30 10:00:37 crc kubenswrapper[4970]: I0930 10:00:37.108594 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2" event={"ID":"1b1a92f2-46aa-492c-906b-1b86c58ba818","Type":"ContainerStarted","Data":"b78c9c2eee88dfc12219356e9960945b0215e81e287cc20e54a0007f434fead9"} Sep 30 10:00:37 crc kubenswrapper[4970]: E0930 10:00:37.110948 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:21792a2317c0a55e40b2a02a7d5d4682b76538ed2a2e0633199aa395e60ecc72\\\"\"" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" podUID="b611fd3e-a529-4c90-8e81-c7352004d62f" Sep 30 10:00:37 crc kubenswrapper[4970]: E0930 10:00:37.111437 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:a6b3408d79df6b6d4a467e49defaa4a9d9c088c94d0605a4fee0030c9ccc84d2\\\"\"" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" podUID="116a4b20-5a9a-4456-8816-637e0740a792" Sep 30 10:00:37 crc kubenswrapper[4970]: I0930 10:00:37.678778 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" path="/var/lib/kubelet/pods/2b7d0772-1c7d-443b-a9f7-fa64461d84bc/volumes" Sep 30 10:00:37 crc kubenswrapper[4970]: I0930 10:00:37.680136 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" path="/var/lib/kubelet/pods/ec858bf1-5d08-410d-b66e-4bb2e6c240b9/volumes" Sep 30 10:00:38 crc kubenswrapper[4970]: I0930 10:00:38.116227 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj" event={"ID":"7131ae21-9827-4028-9841-fbc480e7b938","Type":"ContainerStarted","Data":"c2d67fd6e897c25a77f19fb6b70db0d633b27250e0b8b1057f33db59e1f1c2ef"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.162617 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj" event={"ID":"cefaa649-872b-43be-9763-85ee950bb5d6","Type":"ContainerStarted","Data":"c9cb7568805a3b6d9d5d034483f37e87e7183cfc36d6a9a7a0fae4bbf2db3c0d"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.165121 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs" event={"ID":"db952a6d-9ea1-482e-aec3-7a93fcd6587c","Type":"ContainerStarted","Data":"5ab3a6449c0df763a013c5020f2203476956d80408138d9dd3a9d49df7edb16e"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.167126 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d" event={"ID":"815b1df3-7d86-407a-a793-baec392c0f76","Type":"ContainerStarted","Data":"158d8b7cb64b8b707fc845d577cda14161660a1bdc2b1a0f4fc7f5968f136f1b"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.169957 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr" event={"ID":"d288c95d-759c-4b29-8be6-304869f99ae7","Type":"ContainerStarted","Data":"383a66d2e265c7b05d3ee3e47b725c6a5235078b364aef3f6c92a261e4d9a9e2"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.171888 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z" event={"ID":"0283cb68-98f4-4dcf-99c0-55ebc251dc19","Type":"ContainerStarted","Data":"15aac64551731988bad7e6d4d907e46bca902195328191749d73b4f54ab0896c"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.173641 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j" event={"ID":"527884ff-dc23-4a9d-8911-aedf784b5eb1","Type":"ContainerStarted","Data":"8aea7f919a4d87715bd7ab5741267f7b40a4a74d6767db1cfd47a1e76c5a863e"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.175215 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd" event={"ID":"eea4d20f-1d77-4e9b-bbc3-644ff1a5a314","Type":"ContainerStarted","Data":"2266b82b6881980456545fde9d58b0ef95bd28ab1dafbe30073023eeef454d71"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.176792 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw" event={"ID":"908cf55d-1ac7-4814-9f4e-ddb57acb1b76","Type":"ContainerStarted","Data":"e4653c9c3e8894439f3458d23ed8c676bc7a200f9496f061d71124deef8f56f8"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.178182 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q" event={"ID":"ae58a1aa-0503-4387-91cf-fc6f396a180f","Type":"ContainerStarted","Data":"f5957c8d89be8278bdca02f13e6833a374750798526b02a916d92370ac1e554c"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.183087 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" event={"ID":"40f541c2-3a4e-48ec-a01f-a3d395202085","Type":"ContainerStarted","Data":"9479ae83e6ebe79a3fdaf1ba329e3ca2f3573ea65147d368807e2ea1ceadbcdf"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.184776 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" event={"ID":"b4a0b16f-5d81-4236-850f-03f628bb3595","Type":"ContainerStarted","Data":"2f1616d14a57b8dba8b2bc6810d2309582533d49a41a806f1cc1e60756eb2615"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.186410 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" event={"ID":"b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7","Type":"ContainerStarted","Data":"1bab721caff93259fe717d2740aa16944a25fdd6842d231ed8b9b593455d99af"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.188354 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph" event={"ID":"c9a40f4a-1de7-45da-91e9-4f11637452b2","Type":"ContainerStarted","Data":"cd4ade4e7aa0fa320deea49b259cfe9958665e3abe0189fff21d43d6c4f7b777"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.190696 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp" event={"ID":"0dab040d-a74a-48f1-b2e5-fb2fe6de3b58","Type":"ContainerStarted","Data":"7235ff2fd07c427efb32f764c9343e5be641e86f88a797f56bbad15a314c077b"} Sep 30 10:00:43 crc kubenswrapper[4970]: I0930 10:00:43.192486 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd" event={"ID":"9d9bdcb3-a944-4379-8dfd-858a022e946a","Type":"ContainerStarted","Data":"132d3600b46cc6d4abc88ce3e3bce1e938dae46297631952047a899585199fca"} Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.200811 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2" event={"ID":"1b1a92f2-46aa-492c-906b-1b86c58ba818","Type":"ContainerStarted","Data":"196390ae16abe7b524268c9abd2cb53b76f36384f8dc297cf3a7f7bf838ef99e"} Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.204457 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj" event={"ID":"7131ae21-9827-4028-9841-fbc480e7b938","Type":"ContainerStarted","Data":"dcefd62060ddc45dff71e20df18a14bb72de1b1dca9b9a7d5c50337c87ab7644"} Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.209637 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.209689 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.210686 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.210720 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.210738 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.211980 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.212063 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.215432 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.215554 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.215681 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.259416 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-c7c776c96-74r7d" podStartSLOduration=11.98588597 podStartE2EDuration="28.259386482s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.275070808 +0000 UTC m=+832.346921742" lastFinishedPulling="2025-09-30 10:00:35.54857132 +0000 UTC m=+848.620422254" observedRunningTime="2025-09-30 10:00:44.23719805 +0000 UTC m=+857.309049004" watchObservedRunningTime="2025-09-30 10:00:44.259386482 +0000 UTC m=+857.331237416" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.283179 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-7bcpp" podStartSLOduration=12.756197155 podStartE2EDuration="29.283156636s" podCreationTimestamp="2025-09-30 10:00:15 +0000 UTC" firstStartedPulling="2025-09-30 10:00:18.443492672 +0000 UTC m=+831.515343606" lastFinishedPulling="2025-09-30 10:00:34.970451993 +0000 UTC m=+848.042303087" observedRunningTime="2025-09-30 10:00:44.263968656 +0000 UTC m=+857.335819600" watchObservedRunningTime="2025-09-30 10:00:44.283156636 +0000 UTC m=+857.355007590" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.291748 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr" podStartSLOduration=11.73349745 podStartE2EDuration="29.291717878s" podCreationTimestamp="2025-09-30 10:00:15 +0000 UTC" firstStartedPulling="2025-09-30 10:00:17.411839684 +0000 UTC m=+830.483690618" lastFinishedPulling="2025-09-30 10:00:34.970060112 +0000 UTC m=+848.041911046" observedRunningTime="2025-09-30 10:00:44.282693403 +0000 UTC m=+857.354544357" watchObservedRunningTime="2025-09-30 10:00:44.291717878 +0000 UTC m=+857.363568812" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.309516 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs" podStartSLOduration=12.579859426 podStartE2EDuration="28.309476119s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.17182769 +0000 UTC m=+832.243678624" lastFinishedPulling="2025-09-30 10:00:34.901444363 +0000 UTC m=+847.973295317" observedRunningTime="2025-09-30 10:00:44.301833562 +0000 UTC m=+857.373684506" watchObservedRunningTime="2025-09-30 10:00:44.309476119 +0000 UTC 
m=+857.381327063" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.326841 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-76669f99c-gtp7j" podStartSLOduration=12.067911533 podStartE2EDuration="28.326806919s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.290520017 +0000 UTC m=+832.362370951" lastFinishedPulling="2025-09-30 10:00:35.549415393 +0000 UTC m=+848.621266337" observedRunningTime="2025-09-30 10:00:44.326624174 +0000 UTC m=+857.398475108" watchObservedRunningTime="2025-09-30 10:00:44.326806919 +0000 UTC m=+857.398657853" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.348387 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-rjk8q" podStartSLOduration=12.092603872 podStartE2EDuration="28.348363453s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.290527617 +0000 UTC m=+832.362378551" lastFinishedPulling="2025-09-30 10:00:35.546287198 +0000 UTC m=+848.618138132" observedRunningTime="2025-09-30 10:00:44.340307345 +0000 UTC m=+857.412158279" watchObservedRunningTime="2025-09-30 10:00:44.348363453 +0000 UTC m=+857.420214387" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.369959 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6d776955-gnxmq" podStartSLOduration=12.980395381 podStartE2EDuration="28.369926737s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.582432448 +0000 UTC m=+832.654283382" lastFinishedPulling="2025-09-30 10:00:34.971963804 +0000 UTC m=+848.043814738" observedRunningTime="2025-09-30 10:00:44.366205746 +0000 UTC m=+857.438056680" watchObservedRunningTime="2025-09-30 10:00:44.369926737 +0000 UTC m=+857.441777681" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.396045 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd" podStartSLOduration=12.693492377 podStartE2EDuration="28.396026665s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.273094295 +0000 UTC m=+832.344945229" lastFinishedPulling="2025-09-30 10:00:34.975628583 +0000 UTC m=+848.047479517" observedRunningTime="2025-09-30 10:00:44.395144331 +0000 UTC m=+857.466995285" watchObservedRunningTime="2025-09-30 10:00:44.396026665 +0000 UTC m=+857.467877589" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.418933 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph" podStartSLOduration=11.939281328 podStartE2EDuration="29.418911395s" podCreationTimestamp="2025-09-30 10:00:15 +0000 UTC" firstStartedPulling="2025-09-30 10:00:17.491220086 +0000 UTC m=+830.563071020" lastFinishedPulling="2025-09-30 10:00:34.970850153 +0000 UTC m=+848.042701087" observedRunningTime="2025-09-30 10:00:44.415719708 +0000 UTC m=+857.487570652" watchObservedRunningTime="2025-09-30 10:00:44.418911395 +0000 UTC m=+857.490762329" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.511332 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd" 
podStartSLOduration=12.83128082 podStartE2EDuration="28.511308879s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.290095405 +0000 UTC m=+832.361946339" lastFinishedPulling="2025-09-30 10:00:34.970123464 +0000 UTC m=+848.041974398" observedRunningTime="2025-09-30 10:00:44.508500402 +0000 UTC m=+857.580351527" watchObservedRunningTime="2025-09-30 10:00:44.511308879 +0000 UTC m=+857.583159813" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.610488 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" podStartSLOduration=12.353715718 podStartE2EDuration="28.610461766s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.290897527 +0000 UTC m=+832.362748461" lastFinishedPulling="2025-09-30 10:00:35.547643575 +0000 UTC m=+848.619494509" observedRunningTime="2025-09-30 10:00:44.561410626 +0000 UTC m=+857.633261560" watchObservedRunningTime="2025-09-30 10:00:44.610461766 +0000 UTC m=+857.682312700" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.704661 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj" podStartSLOduration=12.304538364999999 podStartE2EDuration="28.704630498s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:18.571388828 +0000 UTC m=+831.643239762" lastFinishedPulling="2025-09-30 10:00:34.971480961 +0000 UTC m=+848.043331895" observedRunningTime="2025-09-30 10:00:44.665299052 +0000 UTC m=+857.737149986" watchObservedRunningTime="2025-09-30 10:00:44.704630498 +0000 UTC m=+857.776481442" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.715455 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z" podStartSLOduration=12.842140344 podStartE2EDuration="28.715422539s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.098773131 +0000 UTC m=+832.170624065" lastFinishedPulling="2025-09-30 10:00:34.972055326 +0000 UTC m=+848.043906260" observedRunningTime="2025-09-30 10:00:44.702390517 +0000 UTC m=+857.774241451" watchObservedRunningTime="2025-09-30 10:00:44.715422539 +0000 UTC m=+857.787273473" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.739553 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" podStartSLOduration=12.082124027 podStartE2EDuration="28.739526172s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.487494515 +0000 UTC m=+832.559345449" lastFinishedPulling="2025-09-30 10:00:36.14489666 +0000 UTC m=+849.216747594" observedRunningTime="2025-09-30 10:00:44.734391803 +0000 UTC m=+857.806242747" watchObservedRunningTime="2025-09-30 10:00:44.739526172 +0000 UTC m=+857.811377116" Sep 30 10:00:44 crc kubenswrapper[4970]: I0930 10:00:44.780915 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw" podStartSLOduration=13.274829829 podStartE2EDuration="29.780884363s" podCreationTimestamp="2025-09-30 10:00:15 +0000 UTC" firstStartedPulling="2025-09-30 10:00:18.465282093 +0000 UTC m=+831.537133017" lastFinishedPulling="2025-09-30 10:00:34.971336627 +0000 UTC 
m=+848.043187551" observedRunningTime="2025-09-30 10:00:44.760581223 +0000 UTC m=+857.832432157" watchObservedRunningTime="2025-09-30 10:00:44.780884363 +0000 UTC m=+857.852735297" Sep 30 10:00:45 crc kubenswrapper[4970]: I0930 10:00:45.211919 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj" Sep 30 10:00:45 crc kubenswrapper[4970]: I0930 10:00:45.214850 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj" Sep 30 10:00:45 crc kubenswrapper[4970]: I0930 10:00:45.234035 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-644bddb6d8-ws6gj" podStartSLOduration=13.217116265 podStartE2EDuration="30.234012593s" podCreationTimestamp="2025-09-30 10:00:15 +0000 UTC" firstStartedPulling="2025-09-30 10:00:17.885515981 +0000 UTC m=+830.957366915" lastFinishedPulling="2025-09-30 10:00:34.902412309 +0000 UTC m=+847.974263243" observedRunningTime="2025-09-30 10:00:45.231409612 +0000 UTC m=+858.303260546" watchObservedRunningTime="2025-09-30 10:00:45.234012593 +0000 UTC m=+858.305863527" Sep 30 10:00:45 crc kubenswrapper[4970]: I0930 10:00:45.276564 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2" podStartSLOduration=13.608491682 podStartE2EDuration="29.276540166s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.303600151 +0000 UTC m=+832.375451075" lastFinishedPulling="2025-09-30 10:00:34.971648625 +0000 UTC m=+848.043499559" observedRunningTime="2025-09-30 10:00:45.263303847 +0000 UTC m=+858.335154781" watchObservedRunningTime="2025-09-30 10:00:45.276540166 +0000 UTC m=+858.348391100" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.212684 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.217109 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-fdjgr" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.320201 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.322323 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-pf2ph" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.400709 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.404416 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-ckjvw" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.611775 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.614511 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/ironic-operator-controller-manager-7975b88857-js7xj" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.658765 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.660754 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-vjpqd" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.743046 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.746665 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-88c7-vwkw2" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.809953 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z" Sep 30 10:00:46 crc kubenswrapper[4970]: I0930 10:00:46.814341 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-64d7b59854-8m95z" Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.031454 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs" Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.033708 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-ss9vs" Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.079494 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.088530 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7d857cc749-svx8h" Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.167075 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.169354 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-76fcc6dc7c-p9fxz" Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.260113 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" event={"ID":"5cfa1456-1b45-4385-8fc5-27dccef45958","Type":"ContainerStarted","Data":"9e32da5fe2d25907893ea9fcd8365bf22aaf46a26ea295553f241947d49abc6f"} Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.263880 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" event={"ID":"7f9f19d7-d284-4757-94a1-1a86a8f28b17","Type":"ContainerStarted","Data":"68706feb4181d85f7db121136b07e53e1c9e8a837c00711b0d859f3bbbdaf466"} Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.269475 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd" Sep 30 10:00:47 crc 
kubenswrapper[4970]: I0930 10:00:47.269538 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.269738 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-bc7dc7bd9-v5qrd" Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.292257 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" podStartSLOduration=4.7886633960000005 podStartE2EDuration="31.292237671s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.478759178 +0000 UTC m=+832.550610112" lastFinishedPulling="2025-09-30 10:00:45.982333423 +0000 UTC m=+859.054184387" observedRunningTime="2025-09-30 10:00:47.287128893 +0000 UTC m=+860.358979827" watchObservedRunningTime="2025-09-30 10:00:47.292237671 +0000 UTC m=+860.364088605" Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.310699 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" podStartSLOduration=4.664083301 podStartE2EDuration="31.310677631s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.355841027 +0000 UTC m=+832.427691961" lastFinishedPulling="2025-09-30 10:00:46.002435317 +0000 UTC m=+859.074286291" observedRunningTime="2025-09-30 10:00:47.307812563 +0000 UTC m=+860.379663507" watchObservedRunningTime="2025-09-30 10:00:47.310677631 +0000 UTC m=+860.382528565" Sep 30 10:00:47 crc kubenswrapper[4970]: I0930 10:00:47.406027 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" Sep 30 10:00:49 crc kubenswrapper[4970]: I0930 10:00:49.284666 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" event={"ID":"116a4b20-5a9a-4456-8816-637e0740a792","Type":"ContainerStarted","Data":"998b7dc8b7a3e1eeef758531c7a57e169db56ee436ab4f5612d9cac93170e3ee"} Sep 30 10:00:49 crc kubenswrapper[4970]: I0930 10:00:49.285569 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" Sep 30 10:00:49 crc kubenswrapper[4970]: I0930 10:00:49.310319 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" podStartSLOduration=4.194373791 podStartE2EDuration="33.31029288s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.09837114 +0000 UTC m=+832.170222074" lastFinishedPulling="2025-09-30 10:00:48.214290229 +0000 UTC m=+861.286141163" observedRunningTime="2025-09-30 10:00:49.305297584 +0000 UTC m=+862.377148538" watchObservedRunningTime="2025-09-30 10:00:49.31029288 +0000 UTC m=+862.382143814" Sep 30 10:00:53 crc kubenswrapper[4970]: I0930 10:00:53.316083 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26" event={"ID":"7f744173-6696-4797-a55c-85b498bff4da","Type":"ContainerStarted","Data":"d7cea439d1e29cfd612a601b07eab20031ef6920319b805330527b01e65a0997"} Sep 30 10:00:53 crc kubenswrapper[4970]: I0930 
10:00:53.318695 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" event={"ID":"b611fd3e-a529-4c90-8e81-c7352004d62f","Type":"ContainerStarted","Data":"25dc81020f61b78b10e7082d58b73db0acbd2ba3ff8e2dd0b414031f6f2ae6cc"} Sep 30 10:00:53 crc kubenswrapper[4970]: I0930 10:00:53.319079 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" Sep 30 10:00:53 crc kubenswrapper[4970]: I0930 10:00:53.348847 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-79d8469568-kpw26" podStartSLOduration=4.446483703 podStartE2EDuration="37.348808813s" podCreationTimestamp="2025-09-30 10:00:16 +0000 UTC" firstStartedPulling="2025-09-30 10:00:19.342272639 +0000 UTC m=+832.414123573" lastFinishedPulling="2025-09-30 10:00:52.244597749 +0000 UTC m=+865.316448683" observedRunningTime="2025-09-30 10:00:53.341962447 +0000 UTC m=+866.413813391" watchObservedRunningTime="2025-09-30 10:00:53.348808813 +0000 UTC m=+866.420659787" Sep 30 10:00:53 crc kubenswrapper[4970]: I0930 10:00:53.369255 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" podStartSLOduration=4.091790791 podStartE2EDuration="38.369213526s" podCreationTimestamp="2025-09-30 10:00:15 +0000 UTC" firstStartedPulling="2025-09-30 10:00:17.884775221 +0000 UTC m=+830.956626155" lastFinishedPulling="2025-09-30 10:00:52.162197956 +0000 UTC m=+865.234048890" observedRunningTime="2025-09-30 10:00:53.366669627 +0000 UTC m=+866.438520571" watchObservedRunningTime="2025-09-30 10:00:53.369213526 +0000 UTC m=+866.441064500" Sep 30 10:00:57 crc kubenswrapper[4970]: I0930 10:00:57.274221 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-s6hpz" Sep 30 10:00:57 crc kubenswrapper[4970]: I0930 10:00:57.374554 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-bdcpn" Sep 30 10:00:57 crc kubenswrapper[4970]: I0930 10:00:57.411139 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-f66b554c6-sfggm" Sep 30 10:01:06 crc kubenswrapper[4970]: I0930 10:01:06.353487 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-q2llj" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.421526 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kxvhd"] Sep 30 10:01:12 crc kubenswrapper[4970]: E0930 10:01:12.423029 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" containerName="extract-content" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.423063 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" containerName="extract-content" Sep 30 10:01:12 crc kubenswrapper[4970]: E0930 10:01:12.423131 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" containerName="extract-utilities" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.423149 4970 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" containerName="extract-utilities" Sep 30 10:01:12 crc kubenswrapper[4970]: E0930 10:01:12.423194 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" containerName="extract-content" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.423211 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" containerName="extract-content" Sep 30 10:01:12 crc kubenswrapper[4970]: E0930 10:01:12.423228 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" containerName="registry-server" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.423245 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" containerName="registry-server" Sep 30 10:01:12 crc kubenswrapper[4970]: E0930 10:01:12.423301 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" containerName="extract-utilities" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.423318 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" containerName="extract-utilities" Sep 30 10:01:12 crc kubenswrapper[4970]: E0930 10:01:12.423355 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" containerName="registry-server" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.423372 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" containerName="registry-server" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.423742 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b7d0772-1c7d-443b-a9f7-fa64461d84bc" containerName="registry-server" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.423769 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec858bf1-5d08-410d-b66e-4bb2e6c240b9" containerName="registry-server" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.426297 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.431124 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxvhd"] Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.515464 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca1e80b-4b87-493b-9024-c063fc5fa638-utilities\") pod \"certified-operators-kxvhd\" (UID: \"aca1e80b-4b87-493b-9024-c063fc5fa638\") " pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.515519 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsj5x\" (UniqueName: \"kubernetes.io/projected/aca1e80b-4b87-493b-9024-c063fc5fa638-kube-api-access-bsj5x\") pod \"certified-operators-kxvhd\" (UID: \"aca1e80b-4b87-493b-9024-c063fc5fa638\") " pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.515550 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca1e80b-4b87-493b-9024-c063fc5fa638-catalog-content\") pod \"certified-operators-kxvhd\" (UID: \"aca1e80b-4b87-493b-9024-c063fc5fa638\") " pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.616940 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca1e80b-4b87-493b-9024-c063fc5fa638-utilities\") pod \"certified-operators-kxvhd\" (UID: \"aca1e80b-4b87-493b-9024-c063fc5fa638\") " pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.617189 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsj5x\" (UniqueName: \"kubernetes.io/projected/aca1e80b-4b87-493b-9024-c063fc5fa638-kube-api-access-bsj5x\") pod \"certified-operators-kxvhd\" (UID: \"aca1e80b-4b87-493b-9024-c063fc5fa638\") " pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.617207 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca1e80b-4b87-493b-9024-c063fc5fa638-catalog-content\") pod \"certified-operators-kxvhd\" (UID: \"aca1e80b-4b87-493b-9024-c063fc5fa638\") " pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.617630 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aca1e80b-4b87-493b-9024-c063fc5fa638-utilities\") pod \"certified-operators-kxvhd\" (UID: \"aca1e80b-4b87-493b-9024-c063fc5fa638\") " pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.617640 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aca1e80b-4b87-493b-9024-c063fc5fa638-catalog-content\") pod \"certified-operators-kxvhd\" (UID: \"aca1e80b-4b87-493b-9024-c063fc5fa638\") " pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.640970 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bsj5x\" (UniqueName: \"kubernetes.io/projected/aca1e80b-4b87-493b-9024-c063fc5fa638-kube-api-access-bsj5x\") pod \"certified-operators-kxvhd\" (UID: \"aca1e80b-4b87-493b-9024-c063fc5fa638\") " pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:12 crc kubenswrapper[4970]: I0930 10:01:12.750081 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:13 crc kubenswrapper[4970]: W0930 10:01:13.261177 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaca1e80b_4b87_493b_9024_c063fc5fa638.slice/crio-ecb4d41e6152853ad81a89e83bc540506bc8f8aeaaa60cfeb4364a35df4517ca WatchSource:0}: Error finding container ecb4d41e6152853ad81a89e83bc540506bc8f8aeaaa60cfeb4364a35df4517ca: Status 404 returned error can't find the container with id ecb4d41e6152853ad81a89e83bc540506bc8f8aeaaa60cfeb4364a35df4517ca Sep 30 10:01:13 crc kubenswrapper[4970]: I0930 10:01:13.266095 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxvhd"] Sep 30 10:01:13 crc kubenswrapper[4970]: I0930 10:01:13.508620 4970 generic.go:334] "Generic (PLEG): container finished" podID="aca1e80b-4b87-493b-9024-c063fc5fa638" containerID="6d7741017fb5d23f114b121256be9a23e48f1aeeaf294a6e9cd6e3989c6ee621" exitCode=0 Sep 30 10:01:13 crc kubenswrapper[4970]: I0930 10:01:13.508758 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxvhd" event={"ID":"aca1e80b-4b87-493b-9024-c063fc5fa638","Type":"ContainerDied","Data":"6d7741017fb5d23f114b121256be9a23e48f1aeeaf294a6e9cd6e3989c6ee621"} Sep 30 10:01:13 crc kubenswrapper[4970]: I0930 10:01:13.509040 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxvhd" event={"ID":"aca1e80b-4b87-493b-9024-c063fc5fa638","Type":"ContainerStarted","Data":"ecb4d41e6152853ad81a89e83bc540506bc8f8aeaaa60cfeb4364a35df4517ca"} Sep 30 10:01:17 crc kubenswrapper[4970]: I0930 10:01:17.569379 4970 generic.go:334] "Generic (PLEG): container finished" podID="aca1e80b-4b87-493b-9024-c063fc5fa638" containerID="821f3c98016d6e816b7c12ebb85222b61e0e373beea2ac8079faf8cc01501ea5" exitCode=0 Sep 30 10:01:17 crc kubenswrapper[4970]: I0930 10:01:17.569517 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxvhd" event={"ID":"aca1e80b-4b87-493b-9024-c063fc5fa638","Type":"ContainerDied","Data":"821f3c98016d6e816b7c12ebb85222b61e0e373beea2ac8079faf8cc01501ea5"} Sep 30 10:01:19 crc kubenswrapper[4970]: I0930 10:01:19.587928 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kxvhd" event={"ID":"aca1e80b-4b87-493b-9024-c063fc5fa638","Type":"ContainerStarted","Data":"98c5a898d350e447cfa921d8274d23ad1f721a7125a8b00e5a270ea788da3451"} Sep 30 10:01:19 crc kubenswrapper[4970]: I0930 10:01:19.611111 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kxvhd" podStartSLOduration=3.103414441 podStartE2EDuration="7.611085539s" podCreationTimestamp="2025-09-30 10:01:12 +0000 UTC" firstStartedPulling="2025-09-30 10:01:13.510619936 +0000 UTC m=+886.582470860" lastFinishedPulling="2025-09-30 10:01:18.018291014 +0000 UTC m=+891.090141958" observedRunningTime="2025-09-30 10:01:19.608475488 +0000 UTC 
m=+892.680326422" watchObservedRunningTime="2025-09-30 10:01:19.611085539 +0000 UTC m=+892.682936473" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.513923 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-287w7"] Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.517623 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.521436 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-ncnt5" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.521598 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.521723 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.525294 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.534222 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-287w7"] Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.572351 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldcf8\" (UniqueName: \"kubernetes.io/projected/97bb6848-3a5f-49b5-90db-43e76990b684-kube-api-access-ldcf8\") pod \"dnsmasq-dns-675f4bcbfc-287w7\" (UID: \"97bb6848-3a5f-49b5-90db-43e76990b684\") " pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.572441 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bb6848-3a5f-49b5-90db-43e76990b684-config\") pod \"dnsmasq-dns-675f4bcbfc-287w7\" (UID: \"97bb6848-3a5f-49b5-90db-43e76990b684\") " pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.637135 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fvzkh"] Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.638624 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.643029 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.652312 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fvzkh"] Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.675785 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldcf8\" (UniqueName: \"kubernetes.io/projected/97bb6848-3a5f-49b5-90db-43e76990b684-kube-api-access-ldcf8\") pod \"dnsmasq-dns-675f4bcbfc-287w7\" (UID: \"97bb6848-3a5f-49b5-90db-43e76990b684\") " pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.675942 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bb6848-3a5f-49b5-90db-43e76990b684-config\") pod \"dnsmasq-dns-675f4bcbfc-287w7\" (UID: \"97bb6848-3a5f-49b5-90db-43e76990b684\") " pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.675980 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dxgz\" (UniqueName: \"kubernetes.io/projected/1aeaa60a-eceb-45a4-b439-6a3197791a77-kube-api-access-7dxgz\") pod \"dnsmasq-dns-78dd6ddcc-fvzkh\" (UID: \"1aeaa60a-eceb-45a4-b439-6a3197791a77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.676132 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aeaa60a-eceb-45a4-b439-6a3197791a77-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fvzkh\" (UID: \"1aeaa60a-eceb-45a4-b439-6a3197791a77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.676227 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aeaa60a-eceb-45a4-b439-6a3197791a77-config\") pod \"dnsmasq-dns-78dd6ddcc-fvzkh\" (UID: \"1aeaa60a-eceb-45a4-b439-6a3197791a77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.677140 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bb6848-3a5f-49b5-90db-43e76990b684-config\") pod \"dnsmasq-dns-675f4bcbfc-287w7\" (UID: \"97bb6848-3a5f-49b5-90db-43e76990b684\") " pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.700805 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldcf8\" (UniqueName: \"kubernetes.io/projected/97bb6848-3a5f-49b5-90db-43e76990b684-kube-api-access-ldcf8\") pod \"dnsmasq-dns-675f4bcbfc-287w7\" (UID: \"97bb6848-3a5f-49b5-90db-43e76990b684\") " pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.778021 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dxgz\" (UniqueName: \"kubernetes.io/projected/1aeaa60a-eceb-45a4-b439-6a3197791a77-kube-api-access-7dxgz\") pod \"dnsmasq-dns-78dd6ddcc-fvzkh\" (UID: \"1aeaa60a-eceb-45a4-b439-6a3197791a77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:21 crc 
kubenswrapper[4970]: I0930 10:01:21.778131 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aeaa60a-eceb-45a4-b439-6a3197791a77-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fvzkh\" (UID: \"1aeaa60a-eceb-45a4-b439-6a3197791a77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.778167 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aeaa60a-eceb-45a4-b439-6a3197791a77-config\") pod \"dnsmasq-dns-78dd6ddcc-fvzkh\" (UID: \"1aeaa60a-eceb-45a4-b439-6a3197791a77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.779409 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aeaa60a-eceb-45a4-b439-6a3197791a77-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-fvzkh\" (UID: \"1aeaa60a-eceb-45a4-b439-6a3197791a77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.779572 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aeaa60a-eceb-45a4-b439-6a3197791a77-config\") pod \"dnsmasq-dns-78dd6ddcc-fvzkh\" (UID: \"1aeaa60a-eceb-45a4-b439-6a3197791a77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.805691 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dxgz\" (UniqueName: \"kubernetes.io/projected/1aeaa60a-eceb-45a4-b439-6a3197791a77-kube-api-access-7dxgz\") pod \"dnsmasq-dns-78dd6ddcc-fvzkh\" (UID: \"1aeaa60a-eceb-45a4-b439-6a3197791a77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.839826 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" Sep 30 10:01:21 crc kubenswrapper[4970]: I0930 10:01:21.959001 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:22 crc kubenswrapper[4970]: I0930 10:01:22.139037 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-287w7"] Sep 30 10:01:22 crc kubenswrapper[4970]: W0930 10:01:22.148183 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97bb6848_3a5f_49b5_90db_43e76990b684.slice/crio-c988817042efd91796d2d34ec0b9d7147673cde212f9b1953402963cc82e1fe5 WatchSource:0}: Error finding container c988817042efd91796d2d34ec0b9d7147673cde212f9b1953402963cc82e1fe5: Status 404 returned error can't find the container with id c988817042efd91796d2d34ec0b9d7147673cde212f9b1953402963cc82e1fe5 Sep 30 10:01:22 crc kubenswrapper[4970]: I0930 10:01:22.154343 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 10:01:22 crc kubenswrapper[4970]: I0930 10:01:22.516028 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fvzkh"] Sep 30 10:01:22 crc kubenswrapper[4970]: W0930 10:01:22.524784 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1aeaa60a_eceb_45a4_b439_6a3197791a77.slice/crio-470b1da91a1d6e8fa383ebf7bf254ed8ac90ca31b2d929da83f06b10b8085554 WatchSource:0}: Error finding container 470b1da91a1d6e8fa383ebf7bf254ed8ac90ca31b2d929da83f06b10b8085554: Status 404 returned error can't find the container with id 470b1da91a1d6e8fa383ebf7bf254ed8ac90ca31b2d929da83f06b10b8085554 Sep 30 10:01:22 crc kubenswrapper[4970]: I0930 10:01:22.618598 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" event={"ID":"1aeaa60a-eceb-45a4-b439-6a3197791a77","Type":"ContainerStarted","Data":"470b1da91a1d6e8fa383ebf7bf254ed8ac90ca31b2d929da83f06b10b8085554"} Sep 30 10:01:22 crc kubenswrapper[4970]: I0930 10:01:22.619919 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" event={"ID":"97bb6848-3a5f-49b5-90db-43e76990b684","Type":"ContainerStarted","Data":"c988817042efd91796d2d34ec0b9d7147673cde212f9b1953402963cc82e1fe5"} Sep 30 10:01:22 crc kubenswrapper[4970]: I0930 10:01:22.750846 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:22 crc kubenswrapper[4970]: I0930 10:01:22.750939 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:22 crc kubenswrapper[4970]: I0930 10:01:22.834310 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:24 crc kubenswrapper[4970]: I0930 10:01:24.881146 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-287w7"] Sep 30 10:01:24 crc kubenswrapper[4970]: I0930 10:01:24.923160 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-75xl5"] Sep 30 10:01:24 crc kubenswrapper[4970]: I0930 10:01:24.924519 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" Sep 30 10:01:24 crc kubenswrapper[4970]: I0930 10:01:24.950652 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-75xl5"] Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.060858 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f63c6492-5afc-47b4-865c-7f2a1de471c0-config\") pod \"dnsmasq-dns-5ccc8479f9-75xl5\" (UID: \"f63c6492-5afc-47b4-865c-7f2a1de471c0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.060933 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmgj9\" (UniqueName: \"kubernetes.io/projected/f63c6492-5afc-47b4-865c-7f2a1de471c0-kube-api-access-wmgj9\") pod \"dnsmasq-dns-5ccc8479f9-75xl5\" (UID: \"f63c6492-5afc-47b4-865c-7f2a1de471c0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.061008 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f63c6492-5afc-47b4-865c-7f2a1de471c0-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-75xl5\" (UID: \"f63c6492-5afc-47b4-865c-7f2a1de471c0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.162341 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f63c6492-5afc-47b4-865c-7f2a1de471c0-config\") pod \"dnsmasq-dns-5ccc8479f9-75xl5\" (UID: \"f63c6492-5afc-47b4-865c-7f2a1de471c0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.162482 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmgj9\" (UniqueName: \"kubernetes.io/projected/f63c6492-5afc-47b4-865c-7f2a1de471c0-kube-api-access-wmgj9\") pod \"dnsmasq-dns-5ccc8479f9-75xl5\" (UID: \"f63c6492-5afc-47b4-865c-7f2a1de471c0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.163115 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f63c6492-5afc-47b4-865c-7f2a1de471c0-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-75xl5\" (UID: \"f63c6492-5afc-47b4-865c-7f2a1de471c0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.163419 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f63c6492-5afc-47b4-865c-7f2a1de471c0-config\") pod \"dnsmasq-dns-5ccc8479f9-75xl5\" (UID: \"f63c6492-5afc-47b4-865c-7f2a1de471c0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.164160 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f63c6492-5afc-47b4-865c-7f2a1de471c0-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-75xl5\" (UID: \"f63c6492-5afc-47b4-865c-7f2a1de471c0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.199530 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmgj9\" (UniqueName: 
\"kubernetes.io/projected/f63c6492-5afc-47b4-865c-7f2a1de471c0-kube-api-access-wmgj9\") pod \"dnsmasq-dns-5ccc8479f9-75xl5\" (UID: \"f63c6492-5afc-47b4-865c-7f2a1de471c0\") " pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.257101 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.292948 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fvzkh"] Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.327492 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6gn5k"] Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.328882 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.366547 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6gn5k"] Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.488320 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cc17151-a37a-4aea-91c4-02211a139b5d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6gn5k\" (UID: \"8cc17151-a37a-4aea-91c4-02211a139b5d\") " pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.488382 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvbcn\" (UniqueName: \"kubernetes.io/projected/8cc17151-a37a-4aea-91c4-02211a139b5d-kube-api-access-rvbcn\") pod \"dnsmasq-dns-57d769cc4f-6gn5k\" (UID: \"8cc17151-a37a-4aea-91c4-02211a139b5d\") " pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.488417 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cc17151-a37a-4aea-91c4-02211a139b5d-config\") pod \"dnsmasq-dns-57d769cc4f-6gn5k\" (UID: \"8cc17151-a37a-4aea-91c4-02211a139b5d\") " pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.589373 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvbcn\" (UniqueName: \"kubernetes.io/projected/8cc17151-a37a-4aea-91c4-02211a139b5d-kube-api-access-rvbcn\") pod \"dnsmasq-dns-57d769cc4f-6gn5k\" (UID: \"8cc17151-a37a-4aea-91c4-02211a139b5d\") " pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.589885 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cc17151-a37a-4aea-91c4-02211a139b5d-config\") pod \"dnsmasq-dns-57d769cc4f-6gn5k\" (UID: \"8cc17151-a37a-4aea-91c4-02211a139b5d\") " pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.590026 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cc17151-a37a-4aea-91c4-02211a139b5d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6gn5k\" (UID: \"8cc17151-a37a-4aea-91c4-02211a139b5d\") " pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.591394 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cc17151-a37a-4aea-91c4-02211a139b5d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-6gn5k\" (UID: \"8cc17151-a37a-4aea-91c4-02211a139b5d\") " pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.593105 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cc17151-a37a-4aea-91c4-02211a139b5d-config\") pod \"dnsmasq-dns-57d769cc4f-6gn5k\" (UID: \"8cc17151-a37a-4aea-91c4-02211a139b5d\") " pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.613668 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvbcn\" (UniqueName: \"kubernetes.io/projected/8cc17151-a37a-4aea-91c4-02211a139b5d-kube-api-access-rvbcn\") pod \"dnsmasq-dns-57d769cc4f-6gn5k\" (UID: \"8cc17151-a37a-4aea-91c4-02211a139b5d\") " pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.693247 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-75xl5"] Sep 30 10:01:25 crc kubenswrapper[4970]: I0930 10:01:25.702626 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.141814 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.146921 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.152403 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.152597 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.152952 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.153254 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8mc6k" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.153752 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.163875 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.164335 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.164409 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.252677 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6gn5k"] Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.311753 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.311815 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.311848 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.311872 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.311901 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.311923 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-226zq\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-kube-api-access-226zq\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.311956 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0b2bd53-a874-4d92-b137-24f772f5d61c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.311974 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0b2bd53-a874-4d92-b137-24f772f5d61c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.312151 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.312170 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-server-conf\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.312207 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.414338 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.414417 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.414448 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-226zq\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-kube-api-access-226zq\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.414494 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0b2bd53-a874-4d92-b137-24f772f5d61c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.414594 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0b2bd53-a874-4d92-b137-24f772f5d61c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.414632 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.414676 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.414740 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " 
pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.414825 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.414855 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.414880 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.415114 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.415793 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.415859 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.416125 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.419140 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.421904 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.428542 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/e0b2bd53-a874-4d92-b137-24f772f5d61c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.432730 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.436349 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0b2bd53-a874-4d92-b137-24f772f5d61c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.437532 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-226zq\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-kube-api-access-226zq\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.440937 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.449331 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") " pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.476257 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.477663 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.487584 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.489333 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.489840 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rgpx8" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.490044 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.490170 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.491413 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.491496 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.491640 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.495229 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.618942 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kpp5\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-kube-api-access-4kpp5\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.619028 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.619058 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7bc5f72b-8b51-4a55-971a-83135118e627-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.619089 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.619140 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.619177 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.619316 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.619530 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.619590 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.619624 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-config-data\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.619656 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7bc5f72b-8b51-4a55-971a-83135118e627-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.687223 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" event={"ID":"f63c6492-5afc-47b4-865c-7f2a1de471c0","Type":"ContainerStarted","Data":"45837baf2a0de2d74cca94097fb389a1bae8c01e9896d69d1fc23bca5fc8439d"} Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.721216 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.721274 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7bc5f72b-8b51-4a55-971a-83135118e627-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.721304 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.721333 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.721358 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.721413 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.721454 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.721480 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.721520 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-config-data\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.721563 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7bc5f72b-8b51-4a55-971a-83135118e627-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.721607 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kpp5\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-kube-api-access-4kpp5\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.722443 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.724247 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.725304 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-config-data\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.725593 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 
10:01:26.726119 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.727161 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.731336 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.732146 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.732869 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7bc5f72b-8b51-4a55-971a-83135118e627-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.733842 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7bc5f72b-8b51-4a55-971a-83135118e627-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.741586 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kpp5\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-kube-api-access-4kpp5\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.766115 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " pod="openstack/rabbitmq-server-0" Sep 30 10:01:26 crc kubenswrapper[4970]: I0930 10:01:26.806326 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 30 10:01:27 crc kubenswrapper[4970]: I0930 10:01:27.958386 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Sep 30 10:01:27 crc kubenswrapper[4970]: I0930 10:01:27.959741 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Sep 30 10:01:27 crc kubenswrapper[4970]: I0930 10:01:27.963525 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Sep 30 10:01:27 crc kubenswrapper[4970]: I0930 10:01:27.964262 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-rmmpl" Sep 30 10:01:27 crc kubenswrapper[4970]: I0930 10:01:27.964777 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Sep 30 10:01:27 crc kubenswrapper[4970]: I0930 10:01:27.964942 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Sep 30 10:01:27 crc kubenswrapper[4970]: I0930 10:01:27.965369 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Sep 30 10:01:27 crc kubenswrapper[4970]: I0930 10:01:27.970287 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.012200 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.058604 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dee5bc19-bb45-4962-adaf-ff6561817272-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.058666 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dee5bc19-bb45-4962-adaf-ff6561817272-kolla-config\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.058694 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc552\" (UniqueName: \"kubernetes.io/projected/dee5bc19-bb45-4962-adaf-ff6561817272-kube-api-access-fc552\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.058964 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dee5bc19-bb45-4962-adaf-ff6561817272-config-data-default\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.059057 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dee5bc19-bb45-4962-adaf-ff6561817272-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.059105 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dee5bc19-bb45-4962-adaf-ff6561817272-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " 
pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.059157 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.059197 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/dee5bc19-bb45-4962-adaf-ff6561817272-secrets\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.059336 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee5bc19-bb45-4962-adaf-ff6561817272-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.160483 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dee5bc19-bb45-4962-adaf-ff6561817272-config-data-default\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.160535 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dee5bc19-bb45-4962-adaf-ff6561817272-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.160572 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dee5bc19-bb45-4962-adaf-ff6561817272-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.160598 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.160624 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/dee5bc19-bb45-4962-adaf-ff6561817272-secrets\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.160678 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee5bc19-bb45-4962-adaf-ff6561817272-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.160700 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/dee5bc19-bb45-4962-adaf-ff6561817272-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.160718 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dee5bc19-bb45-4962-adaf-ff6561817272-kolla-config\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.160735 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc552\" (UniqueName: \"kubernetes.io/projected/dee5bc19-bb45-4962-adaf-ff6561817272-kube-api-access-fc552\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.161874 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/dee5bc19-bb45-4962-adaf-ff6561817272-config-data-default\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.162768 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dee5bc19-bb45-4962-adaf-ff6561817272-operator-scripts\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.163102 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/dee5bc19-bb45-4962-adaf-ff6561817272-config-data-generated\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.163305 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.168126 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/dee5bc19-bb45-4962-adaf-ff6561817272-kolla-config\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.171566 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/dee5bc19-bb45-4962-adaf-ff6561817272-secrets\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.171601 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/dee5bc19-bb45-4962-adaf-ff6561817272-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 
10:01:28.186097 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc552\" (UniqueName: \"kubernetes.io/projected/dee5bc19-bb45-4962-adaf-ff6561817272-kube-api-access-fc552\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.188597 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dee5bc19-bb45-4962-adaf-ff6561817272-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.198537 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"openstack-galera-0\" (UID: \"dee5bc19-bb45-4962-adaf-ff6561817272\") " pod="openstack/openstack-galera-0" Sep 30 10:01:28 crc kubenswrapper[4970]: I0930 10:01:28.289136 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.220441 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.222705 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.225893 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.226233 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-9jg2c" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.226414 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.229255 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.244407 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.280476 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.280535 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.280639 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " 
pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.280696 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.280731 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68tlp\" (UniqueName: \"kubernetes.io/projected/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-kube-api-access-68tlp\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.280783 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.280833 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.280857 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.280885 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.386195 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.386261 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.386325 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " 
pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.386376 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.386410 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68tlp\" (UniqueName: \"kubernetes.io/projected/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-kube-api-access-68tlp\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.386454 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.386494 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.386519 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.386539 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.387582 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.389497 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.392497 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 
10:01:29.393332 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.393580 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.395793 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.404570 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.412714 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.415402 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.417747 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.423187 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.423459 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-ffd5t" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.423568 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68tlp\" (UniqueName: \"kubernetes.io/projected/60d8ffcf-dc53-4fac-92a0-64136b4b0d4b-kube-api-access-68tlp\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.423647 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b\") " pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.424664 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.429691 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.489133 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xgq6\" (UniqueName: \"kubernetes.io/projected/d524179d-ea87-48d3-b87f-da18d0a059c8-kube-api-access-6xgq6\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.489171 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d524179d-ea87-48d3-b87f-da18d0a059c8-kolla-config\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.489229 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d524179d-ea87-48d3-b87f-da18d0a059c8-config-data\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.490749 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d524179d-ea87-48d3-b87f-da18d0a059c8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.490797 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d524179d-ea87-48d3-b87f-da18d0a059c8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.543727 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.592412 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d524179d-ea87-48d3-b87f-da18d0a059c8-config-data\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.592496 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d524179d-ea87-48d3-b87f-da18d0a059c8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.592556 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d524179d-ea87-48d3-b87f-da18d0a059c8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.592722 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xgq6\" (UniqueName: \"kubernetes.io/projected/d524179d-ea87-48d3-b87f-da18d0a059c8-kube-api-access-6xgq6\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.592772 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d524179d-ea87-48d3-b87f-da18d0a059c8-kolla-config\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.593846 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d524179d-ea87-48d3-b87f-da18d0a059c8-kolla-config\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.594728 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d524179d-ea87-48d3-b87f-da18d0a059c8-config-data\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.595841 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d524179d-ea87-48d3-b87f-da18d0a059c8-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.597309 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d524179d-ea87-48d3-b87f-da18d0a059c8-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.620798 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xgq6\" (UniqueName: \"kubernetes.io/projected/d524179d-ea87-48d3-b87f-da18d0a059c8-kube-api-access-6xgq6\") pod \"memcached-0\" (UID: 
\"d524179d-ea87-48d3-b87f-da18d0a059c8\") " pod="openstack/memcached-0" Sep 30 10:01:29 crc kubenswrapper[4970]: I0930 10:01:29.792350 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Sep 30 10:01:31 crc kubenswrapper[4970]: I0930 10:01:31.343697 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 10:01:31 crc kubenswrapper[4970]: I0930 10:01:31.345279 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 30 10:01:31 crc kubenswrapper[4970]: I0930 10:01:31.347804 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-5mb6m" Sep 30 10:01:31 crc kubenswrapper[4970]: I0930 10:01:31.383166 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 10:01:31 crc kubenswrapper[4970]: I0930 10:01:31.423620 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-547jf\" (UniqueName: \"kubernetes.io/projected/15b0c3a5-a622-4a16-aac7-b807588c48a7-kube-api-access-547jf\") pod \"kube-state-metrics-0\" (UID: \"15b0c3a5-a622-4a16-aac7-b807588c48a7\") " pod="openstack/kube-state-metrics-0" Sep 30 10:01:31 crc kubenswrapper[4970]: I0930 10:01:31.525361 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-547jf\" (UniqueName: \"kubernetes.io/projected/15b0c3a5-a622-4a16-aac7-b807588c48a7-kube-api-access-547jf\") pod \"kube-state-metrics-0\" (UID: \"15b0c3a5-a622-4a16-aac7-b807588c48a7\") " pod="openstack/kube-state-metrics-0" Sep 30 10:01:31 crc kubenswrapper[4970]: I0930 10:01:31.547244 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-547jf\" (UniqueName: \"kubernetes.io/projected/15b0c3a5-a622-4a16-aac7-b807588c48a7-kube-api-access-547jf\") pod \"kube-state-metrics-0\" (UID: \"15b0c3a5-a622-4a16-aac7-b807588c48a7\") " pod="openstack/kube-state-metrics-0" Sep 30 10:01:31 crc kubenswrapper[4970]: I0930 10:01:31.668903 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 30 10:01:32 crc kubenswrapper[4970]: I0930 10:01:32.738749 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" event={"ID":"8cc17151-a37a-4aea-91c4-02211a139b5d","Type":"ContainerStarted","Data":"3c09be5307a0436adb635a405fa337c01fbacd59cfc54aca9c2d8c6b05b032bd"} Sep 30 10:01:32 crc kubenswrapper[4970]: I0930 10:01:32.846824 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kxvhd" Sep 30 10:01:32 crc kubenswrapper[4970]: I0930 10:01:32.984578 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kxvhd"] Sep 30 10:01:33 crc kubenswrapper[4970]: I0930 10:01:33.058874 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wwzc"] Sep 30 10:01:33 crc kubenswrapper[4970]: I0930 10:01:33.059188 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5wwzc" podUID="4281f20f-ca65-49c5-9217-b9a730147510" containerName="registry-server" containerID="cri-o://5220b22e9bcc6fb53e90ad8bc0ad5bceb6b7462a041671b30197a3b30075051a" gracePeriod=2 Sep 30 10:01:33 crc kubenswrapper[4970]: I0930 10:01:33.748722 4970 generic.go:334] "Generic (PLEG): container finished" podID="4281f20f-ca65-49c5-9217-b9a730147510" containerID="5220b22e9bcc6fb53e90ad8bc0ad5bceb6b7462a041671b30197a3b30075051a" exitCode=0 Sep 30 10:01:33 crc kubenswrapper[4970]: I0930 10:01:33.749084 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wwzc" event={"ID":"4281f20f-ca65-49c5-9217-b9a730147510","Type":"ContainerDied","Data":"5220b22e9bcc6fb53e90ad8bc0ad5bceb6b7462a041671b30197a3b30075051a"} Sep 30 10:01:33 crc kubenswrapper[4970]: E0930 10:01:33.809016 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5220b22e9bcc6fb53e90ad8bc0ad5bceb6b7462a041671b30197a3b30075051a is running failed: container process not found" containerID="5220b22e9bcc6fb53e90ad8bc0ad5bceb6b7462a041671b30197a3b30075051a" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 10:01:33 crc kubenswrapper[4970]: E0930 10:01:33.812687 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5220b22e9bcc6fb53e90ad8bc0ad5bceb6b7462a041671b30197a3b30075051a is running failed: container process not found" containerID="5220b22e9bcc6fb53e90ad8bc0ad5bceb6b7462a041671b30197a3b30075051a" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 10:01:33 crc kubenswrapper[4970]: E0930 10:01:33.814186 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5220b22e9bcc6fb53e90ad8bc0ad5bceb6b7462a041671b30197a3b30075051a is running failed: container process not found" containerID="5220b22e9bcc6fb53e90ad8bc0ad5bceb6b7462a041671b30197a3b30075051a" cmd=["grpc_health_probe","-addr=:50051"] Sep 30 10:01:33 crc kubenswrapper[4970]: E0930 10:01:33.814231 4970 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5220b22e9bcc6fb53e90ad8bc0ad5bceb6b7462a041671b30197a3b30075051a is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/certified-operators-5wwzc" podUID="4281f20f-ca65-49c5-9217-b9a730147510" containerName="registry-server" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.638445 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vtdnt"] Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.639623 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.655490 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-sjztd" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.657295 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.657682 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.674815 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vtdnt"] Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.686873 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8f06d0-75e0-4ed8-9e37-086886b019e5-ovn-controller-tls-certs\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.686938 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9w7l\" (UniqueName: \"kubernetes.io/projected/ea8f06d0-75e0-4ed8-9e37-086886b019e5-kube-api-access-z9w7l\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.686974 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ea8f06d0-75e0-4ed8-9e37-086886b019e5-var-run\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.687014 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea8f06d0-75e0-4ed8-9e37-086886b019e5-var-run-ovn\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.687044 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ea8f06d0-75e0-4ed8-9e37-086886b019e5-var-log-ovn\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.687073 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8f06d0-75e0-4ed8-9e37-086886b019e5-combined-ca-bundle\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 
10:01:34.687098 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea8f06d0-75e0-4ed8-9e37-086886b019e5-scripts\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.708166 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-h8572"] Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.710908 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.749410 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-h8572"] Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.788617 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0baee2b3-0d8f-4586-a636-c452b0d541d9-var-lib\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.788680 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ea8f06d0-75e0-4ed8-9e37-086886b019e5-var-log-ovn\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.788699 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0baee2b3-0d8f-4586-a636-c452b0d541d9-scripts\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.788727 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8f06d0-75e0-4ed8-9e37-086886b019e5-combined-ca-bundle\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.788748 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0baee2b3-0d8f-4586-a636-c452b0d541d9-var-log\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.788836 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea8f06d0-75e0-4ed8-9e37-086886b019e5-scripts\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.789015 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwbgg\" (UniqueName: \"kubernetes.io/projected/0baee2b3-0d8f-4586-a636-c452b0d541d9-kube-api-access-kwbgg\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 
10:01:34.789049 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0baee2b3-0d8f-4586-a636-c452b0d541d9-var-run\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.789145 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0baee2b3-0d8f-4586-a636-c452b0d541d9-etc-ovs\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.789186 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8f06d0-75e0-4ed8-9e37-086886b019e5-ovn-controller-tls-certs\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.789220 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9w7l\" (UniqueName: \"kubernetes.io/projected/ea8f06d0-75e0-4ed8-9e37-086886b019e5-kube-api-access-z9w7l\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.789261 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ea8f06d0-75e0-4ed8-9e37-086886b019e5-var-run\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.789283 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea8f06d0-75e0-4ed8-9e37-086886b019e5-var-run-ovn\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.789592 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea8f06d0-75e0-4ed8-9e37-086886b019e5-var-run-ovn\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.790077 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ea8f06d0-75e0-4ed8-9e37-086886b019e5-var-run\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.796123 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ea8f06d0-75e0-4ed8-9e37-086886b019e5-ovn-controller-tls-certs\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.807210 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9w7l\" (UniqueName: 
\"kubernetes.io/projected/ea8f06d0-75e0-4ed8-9e37-086886b019e5-kube-api-access-z9w7l\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.812840 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea8f06d0-75e0-4ed8-9e37-086886b019e5-combined-ca-bundle\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.845183 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ea8f06d0-75e0-4ed8-9e37-086886b019e5-var-log-ovn\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.856439 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ea8f06d0-75e0-4ed8-9e37-086886b019e5-scripts\") pod \"ovn-controller-vtdnt\" (UID: \"ea8f06d0-75e0-4ed8-9e37-086886b019e5\") " pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.904953 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwbgg\" (UniqueName: \"kubernetes.io/projected/0baee2b3-0d8f-4586-a636-c452b0d541d9-kube-api-access-kwbgg\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.905026 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0baee2b3-0d8f-4586-a636-c452b0d541d9-var-run\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.905088 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0baee2b3-0d8f-4586-a636-c452b0d541d9-etc-ovs\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.905168 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0baee2b3-0d8f-4586-a636-c452b0d541d9-var-lib\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.905300 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0baee2b3-0d8f-4586-a636-c452b0d541d9-var-run\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.905554 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0baee2b3-0d8f-4586-a636-c452b0d541d9-etc-ovs\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 
10:01:34.905684 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0baee2b3-0d8f-4586-a636-c452b0d541d9-var-lib\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.905785 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0baee2b3-0d8f-4586-a636-c452b0d541d9-scripts\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.905866 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0baee2b3-0d8f-4586-a636-c452b0d541d9-var-log\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.906034 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0baee2b3-0d8f-4586-a636-c452b0d541d9-var-log\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.907763 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0baee2b3-0d8f-4586-a636-c452b0d541d9-scripts\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.926009 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwbgg\" (UniqueName: \"kubernetes.io/projected/0baee2b3-0d8f-4586-a636-c452b0d541d9-kube-api-access-kwbgg\") pod \"ovn-controller-ovs-h8572\" (UID: \"0baee2b3-0d8f-4586-a636-c452b0d541d9\") " pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:34 crc kubenswrapper[4970]: I0930 10:01:34.976781 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.039573 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.052232 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.054103 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.058476 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.058716 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.059119 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.059275 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.059419 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-96jcj" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.074530 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.109747 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.109822 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23d91298-5a5e-428e-afe3-f5625b74f3e0-config\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.109856 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23d91298-5a5e-428e-afe3-f5625b74f3e0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.109884 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d91298-5a5e-428e-afe3-f5625b74f3e0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.109912 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d91298-5a5e-428e-afe3-f5625b74f3e0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.109930 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/23d91298-5a5e-428e-afe3-f5625b74f3e0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.109954 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/23d91298-5a5e-428e-afe3-f5625b74f3e0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.110005 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwxcw\" (UniqueName: \"kubernetes.io/projected/23d91298-5a5e-428e-afe3-f5625b74f3e0-kube-api-access-jwxcw\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.212889 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.212972 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23d91298-5a5e-428e-afe3-f5625b74f3e0-config\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.213032 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23d91298-5a5e-428e-afe3-f5625b74f3e0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.213069 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d91298-5a5e-428e-afe3-f5625b74f3e0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.213116 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/23d91298-5a5e-428e-afe3-f5625b74f3e0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.213145 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d91298-5a5e-428e-afe3-f5625b74f3e0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.213181 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d91298-5a5e-428e-afe3-f5625b74f3e0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.213222 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwxcw\" (UniqueName: \"kubernetes.io/projected/23d91298-5a5e-428e-afe3-f5625b74f3e0-kube-api-access-jwxcw\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 
10:01:35.214081 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.215316 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/23d91298-5a5e-428e-afe3-f5625b74f3e0-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.216158 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/23d91298-5a5e-428e-afe3-f5625b74f3e0-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.216208 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23d91298-5a5e-428e-afe3-f5625b74f3e0-config\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.218044 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d91298-5a5e-428e-afe3-f5625b74f3e0-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.227915 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23d91298-5a5e-428e-afe3-f5625b74f3e0-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.233488 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/23d91298-5a5e-428e-afe3-f5625b74f3e0-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.237513 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwxcw\" (UniqueName: \"kubernetes.io/projected/23d91298-5a5e-428e-afe3-f5625b74f3e0-kube-api-access-jwxcw\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.260464 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"ovsdbserver-nb-0\" (UID: \"23d91298-5a5e-428e-afe3-f5625b74f3e0\") " pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:35 crc kubenswrapper[4970]: I0930 10:01:35.373855 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:38 crc kubenswrapper[4970]: I0930 10:01:38.046774 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.442666 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.444867 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.450327 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.450792 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.450831 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-rgrrh" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.457343 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.460979 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.492776 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqmms\" (UniqueName: \"kubernetes.io/projected/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-kube-api-access-rqmms\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.492826 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.492868 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.492914 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.492939 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.492965 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.493355 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.493493 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-config\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.594880 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.594953 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.595010 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.595110 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.595164 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-config\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.595213 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqmms\" (UniqueName: \"kubernetes.io/projected/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-kube-api-access-rqmms\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.595240 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc 
kubenswrapper[4970]: I0930 10:01:39.595274 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.595683 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.595911 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.597959 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.598114 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-config\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.608238 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.608246 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.610516 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.616873 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqmms\" (UniqueName: \"kubernetes.io/projected/6fb84c2f-32c7-4ac2-b7aa-343846c86bfa-kube-api-access-rqmms\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.627379 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa\") " pod="openstack/ovsdbserver-sb-0" Sep 
30 10:01:39 crc kubenswrapper[4970]: I0930 10:01:39.779887 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.645659 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.756716 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4281f20f-ca65-49c5-9217-b9a730147510-utilities\") pod \"4281f20f-ca65-49c5-9217-b9a730147510\" (UID: \"4281f20f-ca65-49c5-9217-b9a730147510\") " Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.756783 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4281f20f-ca65-49c5-9217-b9a730147510-catalog-content\") pod \"4281f20f-ca65-49c5-9217-b9a730147510\" (UID: \"4281f20f-ca65-49c5-9217-b9a730147510\") " Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.756808 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qbk2\" (UniqueName: \"kubernetes.io/projected/4281f20f-ca65-49c5-9217-b9a730147510-kube-api-access-9qbk2\") pod \"4281f20f-ca65-49c5-9217-b9a730147510\" (UID: \"4281f20f-ca65-49c5-9217-b9a730147510\") " Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.758215 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4281f20f-ca65-49c5-9217-b9a730147510-utilities" (OuterVolumeSpecName: "utilities") pod "4281f20f-ca65-49c5-9217-b9a730147510" (UID: "4281f20f-ca65-49c5-9217-b9a730147510"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.763433 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4281f20f-ca65-49c5-9217-b9a730147510-kube-api-access-9qbk2" (OuterVolumeSpecName: "kube-api-access-9qbk2") pod "4281f20f-ca65-49c5-9217-b9a730147510" (UID: "4281f20f-ca65-49c5-9217-b9a730147510"). InnerVolumeSpecName "kube-api-access-9qbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.809530 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4281f20f-ca65-49c5-9217-b9a730147510-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4281f20f-ca65-49c5-9217-b9a730147510" (UID: "4281f20f-ca65-49c5-9217-b9a730147510"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.839313 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5wwzc" Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.840148 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wwzc" event={"ID":"4281f20f-ca65-49c5-9217-b9a730147510","Type":"ContainerDied","Data":"fe5ae57e7a87b34f0d4076c3552c6a43d68442c42502b2430362834c61a4f24a"} Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.840250 4970 scope.go:117] "RemoveContainer" containerID="5220b22e9bcc6fb53e90ad8bc0ad5bceb6b7462a041671b30197a3b30075051a" Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.843022 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b","Type":"ContainerStarted","Data":"ee341c800a8c22799369ec6eef1af88313deebe22bfdd84e5fba1dc36e330941"} Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.860201 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4281f20f-ca65-49c5-9217-b9a730147510-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.860247 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qbk2\" (UniqueName: \"kubernetes.io/projected/4281f20f-ca65-49c5-9217-b9a730147510-kube-api-access-9qbk2\") on node \"crc\" DevicePath \"\"" Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.860260 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4281f20f-ca65-49c5-9217-b9a730147510-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.887372 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wwzc"] Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.897903 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5wwzc"] Sep 30 10:01:42 crc kubenswrapper[4970]: I0930 10:01:42.985332 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 10:01:43 crc kubenswrapper[4970]: E0930 10:01:43.573103 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Sep 30 10:01:43 crc kubenswrapper[4970]: E0930 10:01:43.573288 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dxgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-fvzkh_openstack(1aeaa60a-eceb-45a4-b439-6a3197791a77): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:01:43 crc kubenswrapper[4970]: E0930 10:01:43.574973 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" podUID="1aeaa60a-eceb-45a4-b439-6a3197791a77" Sep 30 10:01:43 crc kubenswrapper[4970]: E0930 10:01:43.585380 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Sep 30 10:01:43 crc kubenswrapper[4970]: E0930 10:01:43.585777 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldcf8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-287w7_openstack(97bb6848-3a5f-49b5-90db-43e76990b684): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:01:43 crc kubenswrapper[4970]: E0930 10:01:43.587637 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" podUID="97bb6848-3a5f-49b5-90db-43e76990b684" Sep 30 10:01:43 crc kubenswrapper[4970]: E0930 10:01:43.604227 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Sep 30 10:01:43 crc kubenswrapper[4970]: E0930 10:01:43.604471 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmgj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-75xl5_openstack(f63c6492-5afc-47b4-865c-7f2a1de471c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:01:43 crc kubenswrapper[4970]: E0930 10:01:43.605904 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" podUID="f63c6492-5afc-47b4-865c-7f2a1de471c0" Sep 30 10:01:43 crc kubenswrapper[4970]: I0930 10:01:43.699202 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4281f20f-ca65-49c5-9217-b9a730147510" path="/var/lib/kubelet/pods/4281f20f-ca65-49c5-9217-b9a730147510/volumes" Sep 30 10:01:43 crc kubenswrapper[4970]: I0930 10:01:43.759235 4970 scope.go:117] "RemoveContainer" containerID="cad3284927321d7a4c6da0e2a19371f82e28e217909fbb8aaec78bf6c4b84bc0" Sep 30 10:01:43 crc kubenswrapper[4970]: I0930 10:01:43.874213 4970 scope.go:117] "RemoveContainer" containerID="bdc5adfd9a10d82e2ef3d211faa4c1388190462aee9bd1f7cd9cd9cb7966c168" Sep 30 10:01:43 crc kubenswrapper[4970]: I0930 10:01:43.947544 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"15b0c3a5-a622-4a16-aac7-b807588c48a7","Type":"ContainerStarted","Data":"72e7d130ea0dd730215b73f26d7d5c15ada9b25b99a5639d85fb55eb9d4dc7ca"} Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.251902 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Sep 30 10:01:44 crc kubenswrapper[4970]: E0930 10:01:44.277805 4970 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Sep 30 10:01:44 crc kubenswrapper[4970]: rpc error: code = Unknown desc = container create failed: 
mount `/var/lib/kubelet/pods/f63c6492-5afc-47b4-865c-7f2a1de471c0/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Sep 30 10:01:44 crc kubenswrapper[4970]: > podSandboxID="45837baf2a0de2d74cca94097fb389a1bae8c01e9896d69d1fc23bca5fc8439d" Sep 30 10:01:44 crc kubenswrapper[4970]: E0930 10:01:44.278026 4970 kuberuntime_manager.go:1274] "Unhandled Error" err=< Sep 30 10:01:44 crc kubenswrapper[4970]: init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wmgj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-75xl5_openstack(f63c6492-5afc-47b4-865c-7f2a1de471c0): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/f63c6492-5afc-47b4-865c-7f2a1de471c0/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Sep 30 10:01:44 crc kubenswrapper[4970]: > logger="UnhandledError" Sep 30 10:01:44 crc kubenswrapper[4970]: E0930 10:01:44.279663 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/f63c6492-5afc-47b4-865c-7f2a1de471c0/volume-subpaths/dns-svc/init/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" podUID="f63c6492-5afc-47b4-865c-7f2a1de471c0" Sep 30 10:01:44 crc kubenswrapper[4970]: W0930 10:01:44.303220 4970 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddee5bc19_bb45_4962_adaf_ff6561817272.slice/crio-d6ed51ba7bf7230f9a4141aab4427a95aeca1bf0c9f4d9604b79e63862eb6972 WatchSource:0}: Error finding container d6ed51ba7bf7230f9a4141aab4427a95aeca1bf0c9f4d9604b79e63862eb6972: Status 404 returned error can't find the container with id d6ed51ba7bf7230f9a4141aab4427a95aeca1bf0c9f4d9604b79e63862eb6972 Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.412851 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.420017 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.426369 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.663103 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-h8572"] Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.670085 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.682456 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.694491 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vtdnt"] Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.721092 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dxgz\" (UniqueName: \"kubernetes.io/projected/1aeaa60a-eceb-45a4-b439-6a3197791a77-kube-api-access-7dxgz\") pod \"1aeaa60a-eceb-45a4-b439-6a3197791a77\" (UID: \"1aeaa60a-eceb-45a4-b439-6a3197791a77\") " Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.721196 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldcf8\" (UniqueName: \"kubernetes.io/projected/97bb6848-3a5f-49b5-90db-43e76990b684-kube-api-access-ldcf8\") pod \"97bb6848-3a5f-49b5-90db-43e76990b684\" (UID: \"97bb6848-3a5f-49b5-90db-43e76990b684\") " Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.721229 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aeaa60a-eceb-45a4-b439-6a3197791a77-config\") pod \"1aeaa60a-eceb-45a4-b439-6a3197791a77\" (UID: \"1aeaa60a-eceb-45a4-b439-6a3197791a77\") " Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.721289 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aeaa60a-eceb-45a4-b439-6a3197791a77-dns-svc\") pod \"1aeaa60a-eceb-45a4-b439-6a3197791a77\" (UID: \"1aeaa60a-eceb-45a4-b439-6a3197791a77\") " Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.721361 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bb6848-3a5f-49b5-90db-43e76990b684-config\") pod \"97bb6848-3a5f-49b5-90db-43e76990b684\" (UID: \"97bb6848-3a5f-49b5-90db-43e76990b684\") " Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.722962 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aeaa60a-eceb-45a4-b439-6a3197791a77-config" (OuterVolumeSpecName: 
"config") pod "1aeaa60a-eceb-45a4-b439-6a3197791a77" (UID: "1aeaa60a-eceb-45a4-b439-6a3197791a77"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.723549 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1aeaa60a-eceb-45a4-b439-6a3197791a77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1aeaa60a-eceb-45a4-b439-6a3197791a77" (UID: "1aeaa60a-eceb-45a4-b439-6a3197791a77"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.723841 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97bb6848-3a5f-49b5-90db-43e76990b684-config" (OuterVolumeSpecName: "config") pod "97bb6848-3a5f-49b5-90db-43e76990b684" (UID: "97bb6848-3a5f-49b5-90db-43e76990b684"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.729017 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1aeaa60a-eceb-45a4-b439-6a3197791a77-kube-api-access-7dxgz" (OuterVolumeSpecName: "kube-api-access-7dxgz") pod "1aeaa60a-eceb-45a4-b439-6a3197791a77" (UID: "1aeaa60a-eceb-45a4-b439-6a3197791a77"). InnerVolumeSpecName "kube-api-access-7dxgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.730102 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97bb6848-3a5f-49b5-90db-43e76990b684-kube-api-access-ldcf8" (OuterVolumeSpecName: "kube-api-access-ldcf8") pod "97bb6848-3a5f-49b5-90db-43e76990b684" (UID: "97bb6848-3a5f-49b5-90db-43e76990b684"). InnerVolumeSpecName "kube-api-access-ldcf8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.786677 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.823186 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1aeaa60a-eceb-45a4-b439-6a3197791a77-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.823217 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97bb6848-3a5f-49b5-90db-43e76990b684-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.823230 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dxgz\" (UniqueName: \"kubernetes.io/projected/1aeaa60a-eceb-45a4-b439-6a3197791a77-kube-api-access-7dxgz\") on node \"crc\" DevicePath \"\"" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.823240 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldcf8\" (UniqueName: \"kubernetes.io/projected/97bb6848-3a5f-49b5-90db-43e76990b684-kube-api-access-ldcf8\") on node \"crc\" DevicePath \"\"" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.823254 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1aeaa60a-eceb-45a4-b439-6a3197791a77-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.956900 4970 generic.go:334] "Generic (PLEG): container finished" podID="8cc17151-a37a-4aea-91c4-02211a139b5d" containerID="6bbcdf868d540af5968103c68276fde46beb82eb040ed17ad23112e7972a0615" exitCode=0 Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.957230 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" event={"ID":"8cc17151-a37a-4aea-91c4-02211a139b5d","Type":"ContainerDied","Data":"6bbcdf868d540af5968103c68276fde46beb82eb040ed17ad23112e7972a0615"} Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.958840 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7bc5f72b-8b51-4a55-971a-83135118e627","Type":"ContainerStarted","Data":"7bf9ea78e61f7734f2406e7fcc3f0a90258cc8afb7d343365cca20bf7e4b5fa8"} Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.961260 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.961256 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-fvzkh" event={"ID":"1aeaa60a-eceb-45a4-b439-6a3197791a77","Type":"ContainerDied","Data":"470b1da91a1d6e8fa383ebf7bf254ed8ac90ca31b2d929da83f06b10b8085554"} Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.968938 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.968940 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-287w7" event={"ID":"97bb6848-3a5f-49b5-90db-43e76990b684","Type":"ContainerDied","Data":"c988817042efd91796d2d34ec0b9d7147673cde212f9b1953402963cc82e1fe5"} Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.970137 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d524179d-ea87-48d3-b87f-da18d0a059c8","Type":"ContainerStarted","Data":"926e6a70cb11535e8d72bbefc08e4ea8c1ecf2463e5d468120d8aebb77fdc653"} Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.971066 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dee5bc19-bb45-4962-adaf-ff6561817272","Type":"ContainerStarted","Data":"d6ed51ba7bf7230f9a4141aab4427a95aeca1bf0c9f4d9604b79e63862eb6972"} Sep 30 10:01:44 crc kubenswrapper[4970]: I0930 10:01:44.972786 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0b2bd53-a874-4d92-b137-24f772f5d61c","Type":"ContainerStarted","Data":"c37922e62f634d228d88b700a70cc73dc906bb8c45949613e14146318a56387c"} Sep 30 10:01:45 crc kubenswrapper[4970]: I0930 10:01:45.052322 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fvzkh"] Sep 30 10:01:45 crc kubenswrapper[4970]: I0930 10:01:45.063294 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-fvzkh"] Sep 30 10:01:45 crc kubenswrapper[4970]: I0930 10:01:45.088908 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-287w7"] Sep 30 10:01:45 crc kubenswrapper[4970]: I0930 10:01:45.095204 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-287w7"] Sep 30 10:01:45 crc kubenswrapper[4970]: W0930 10:01:45.127244 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0baee2b3_0d8f_4586_a636_c452b0d541d9.slice/crio-1f5c568370d104b58527c0e128e1b31bc627afa3c170ede15bd7fef419da934f WatchSource:0}: Error finding container 1f5c568370d104b58527c0e128e1b31bc627afa3c170ede15bd7fef419da934f: Status 404 returned error can't find the container with id 1f5c568370d104b58527c0e128e1b31bc627afa3c170ede15bd7fef419da934f Sep 30 10:01:45 crc kubenswrapper[4970]: W0930 10:01:45.129019 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea8f06d0_75e0_4ed8_9e37_086886b019e5.slice/crio-587e497b6a1bdaf4f5487c6d186b71bae162467f9e6459ad2f2742a25a7d7cf7 WatchSource:0}: Error finding container 587e497b6a1bdaf4f5487c6d186b71bae162467f9e6459ad2f2742a25a7d7cf7: Status 404 returned error can't find the container with id 587e497b6a1bdaf4f5487c6d186b71bae162467f9e6459ad2f2742a25a7d7cf7 Sep 30 10:01:45 crc kubenswrapper[4970]: W0930 10:01:45.133541 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fb84c2f_32c7_4ac2_b7aa_343846c86bfa.slice/crio-0d508bf2d8e1e9105589ca6c0db4358bb90d893d5e2bd0efa254690d48684de6 WatchSource:0}: Error finding container 0d508bf2d8e1e9105589ca6c0db4358bb90d893d5e2bd0efa254690d48684de6: Status 404 returned error can't find the container with id 
0d508bf2d8e1e9105589ca6c0db4358bb90d893d5e2bd0efa254690d48684de6 Sep 30 10:01:45 crc kubenswrapper[4970]: I0930 10:01:45.624527 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Sep 30 10:01:45 crc kubenswrapper[4970]: I0930 10:01:45.693963 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1aeaa60a-eceb-45a4-b439-6a3197791a77" path="/var/lib/kubelet/pods/1aeaa60a-eceb-45a4-b439-6a3197791a77/volumes" Sep 30 10:01:45 crc kubenswrapper[4970]: I0930 10:01:45.694367 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97bb6848-3a5f-49b5-90db-43e76990b684" path="/var/lib/kubelet/pods/97bb6848-3a5f-49b5-90db-43e76990b684/volumes" Sep 30 10:01:45 crc kubenswrapper[4970]: W0930 10:01:45.755436 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23d91298_5a5e_428e_afe3_f5625b74f3e0.slice/crio-7ea3ba0b0a78d177168f5ef1000008ed893772985449b7360182708d8ac23a3f WatchSource:0}: Error finding container 7ea3ba0b0a78d177168f5ef1000008ed893772985449b7360182708d8ac23a3f: Status 404 returned error can't find the container with id 7ea3ba0b0a78d177168f5ef1000008ed893772985449b7360182708d8ac23a3f Sep 30 10:01:45 crc kubenswrapper[4970]: I0930 10:01:45.985782 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa","Type":"ContainerStarted","Data":"0d508bf2d8e1e9105589ca6c0db4358bb90d893d5e2bd0efa254690d48684de6"} Sep 30 10:01:45 crc kubenswrapper[4970]: I0930 10:01:45.987533 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vtdnt" event={"ID":"ea8f06d0-75e0-4ed8-9e37-086886b019e5","Type":"ContainerStarted","Data":"587e497b6a1bdaf4f5487c6d186b71bae162467f9e6459ad2f2742a25a7d7cf7"} Sep 30 10:01:45 crc kubenswrapper[4970]: I0930 10:01:45.991049 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"23d91298-5a5e-428e-afe3-f5625b74f3e0","Type":"ContainerStarted","Data":"7ea3ba0b0a78d177168f5ef1000008ed893772985449b7360182708d8ac23a3f"} Sep 30 10:01:45 crc kubenswrapper[4970]: I0930 10:01:45.994763 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h8572" event={"ID":"0baee2b3-0d8f-4586-a636-c452b0d541d9","Type":"ContainerStarted","Data":"1f5c568370d104b58527c0e128e1b31bc627afa3c170ede15bd7fef419da934f"} Sep 30 10:01:52 crc kubenswrapper[4970]: I0930 10:01:52.066098 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b","Type":"ContainerStarted","Data":"29ab07917ac549895a0c4269a4435dc44198921dbc2444af8b1e7f2c089e8a31"} Sep 30 10:01:52 crc kubenswrapper[4970]: I0930 10:01:52.074268 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d524179d-ea87-48d3-b87f-da18d0a059c8","Type":"ContainerStarted","Data":"1d8c176ab0160d79405cd2abf29a69e571b0056dec210767c2a2ef6085a4f40d"} Sep 30 10:01:52 crc kubenswrapper[4970]: I0930 10:01:52.075211 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Sep 30 10:01:52 crc kubenswrapper[4970]: I0930 10:01:52.078664 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"dee5bc19-bb45-4962-adaf-ff6561817272","Type":"ContainerStarted","Data":"c1f5db3f9b42cce05ad2eec1ad8cb868ec7b0eed25042a692ea836d36718c32c"} Sep 30 10:01:52 crc kubenswrapper[4970]: I0930 10:01:52.086819 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0b2bd53-a874-4d92-b137-24f772f5d61c","Type":"ContainerStarted","Data":"0fa72e8e7de3abae746e76bd2c494053bb66770c7d210235abd107e7a49c3403"} Sep 30 10:01:52 crc kubenswrapper[4970]: I0930 10:01:52.095602 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"15b0c3a5-a622-4a16-aac7-b807588c48a7","Type":"ContainerStarted","Data":"41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b"} Sep 30 10:01:52 crc kubenswrapper[4970]: I0930 10:01:52.096454 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Sep 30 10:01:52 crc kubenswrapper[4970]: I0930 10:01:52.102588 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" event={"ID":"8cc17151-a37a-4aea-91c4-02211a139b5d","Type":"ContainerStarted","Data":"33fe198a15600c73017f35ca6769a76007f1ec9add44b8fd185fecf897dbcd9e"} Sep 30 10:01:52 crc kubenswrapper[4970]: I0930 10:01:52.102790 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:52 crc kubenswrapper[4970]: I0930 10:01:52.186241 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=16.490439045 podStartE2EDuration="23.18621889s" podCreationTimestamp="2025-09-30 10:01:29 +0000 UTC" firstStartedPulling="2025-09-30 10:01:44.476363448 +0000 UTC m=+917.548214382" lastFinishedPulling="2025-09-30 10:01:51.172143293 +0000 UTC m=+924.243994227" observedRunningTime="2025-09-30 10:01:52.180532314 +0000 UTC m=+925.252383298" watchObservedRunningTime="2025-09-30 10:01:52.18621889 +0000 UTC m=+925.258069824" Sep 30 10:01:52 crc kubenswrapper[4970]: I0930 10:01:52.219316 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.140931716 podStartE2EDuration="21.219271094s" podCreationTimestamp="2025-09-30 10:01:31 +0000 UTC" firstStartedPulling="2025-09-30 10:01:43.608941205 +0000 UTC m=+916.680792139" lastFinishedPulling="2025-09-30 10:01:51.687280583 +0000 UTC m=+924.759131517" observedRunningTime="2025-09-30 10:01:52.211780469 +0000 UTC m=+925.283631403" watchObservedRunningTime="2025-09-30 10:01:52.219271094 +0000 UTC m=+925.291122028" Sep 30 10:01:52 crc kubenswrapper[4970]: I0930 10:01:52.232113 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" podStartSLOduration=15.252816886 podStartE2EDuration="27.232088355s" podCreationTimestamp="2025-09-30 10:01:25 +0000 UTC" firstStartedPulling="2025-09-30 10:01:32.012825554 +0000 UTC m=+905.084676498" lastFinishedPulling="2025-09-30 10:01:43.992097033 +0000 UTC m=+917.063947967" observedRunningTime="2025-09-30 10:01:52.230345878 +0000 UTC m=+925.302196802" watchObservedRunningTime="2025-09-30 10:01:52.232088355 +0000 UTC m=+925.303939289" Sep 30 10:01:53 crc kubenswrapper[4970]: I0930 10:01:53.115583 4970 generic.go:334] "Generic (PLEG): container finished" podID="0baee2b3-0d8f-4586-a636-c452b0d541d9" containerID="aded63e028925919a2a04948b8d3539bc2e0cc4857af1ed0002feb74b2c0a196" exitCode=0 Sep 30 10:01:53 crc 
kubenswrapper[4970]: I0930 10:01:53.115723 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h8572" event={"ID":"0baee2b3-0d8f-4586-a636-c452b0d541d9","Type":"ContainerDied","Data":"aded63e028925919a2a04948b8d3539bc2e0cc4857af1ed0002feb74b2c0a196"} Sep 30 10:01:53 crc kubenswrapper[4970]: I0930 10:01:53.118406 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa","Type":"ContainerStarted","Data":"beb17d629585ea6689417a92650e96c773aea0daa407430eb19cee03a3544c2f"} Sep 30 10:01:53 crc kubenswrapper[4970]: I0930 10:01:53.120705 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vtdnt" event={"ID":"ea8f06d0-75e0-4ed8-9e37-086886b019e5","Type":"ContainerStarted","Data":"b52d238802c0d1c2ba344749b342cc3412f9e762f9ed4ccae3f447338e966354"} Sep 30 10:01:53 crc kubenswrapper[4970]: I0930 10:01:53.121521 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-vtdnt" Sep 30 10:01:53 crc kubenswrapper[4970]: I0930 10:01:53.124014 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7bc5f72b-8b51-4a55-971a-83135118e627","Type":"ContainerStarted","Data":"bd3bcbead8ffdfabbe6793eb7505c5bbcae238ae1df09b02911c00024d615194"} Sep 30 10:01:53 crc kubenswrapper[4970]: I0930 10:01:53.127468 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"23d91298-5a5e-428e-afe3-f5625b74f3e0","Type":"ContainerStarted","Data":"d49559b4e551f30bfd1d2e1ebb4857dbfd72a577bc9dd5c7c54a3a80388f850d"} Sep 30 10:01:53 crc kubenswrapper[4970]: I0930 10:01:53.158347 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vtdnt" podStartSLOduration=12.563706132 podStartE2EDuration="19.158321707s" podCreationTimestamp="2025-09-30 10:01:34 +0000 UTC" firstStartedPulling="2025-09-30 10:01:45.133731682 +0000 UTC m=+918.205582616" lastFinishedPulling="2025-09-30 10:01:51.728347257 +0000 UTC m=+924.800198191" observedRunningTime="2025-09-30 10:01:53.155203552 +0000 UTC m=+926.227054486" watchObservedRunningTime="2025-09-30 10:01:53.158321707 +0000 UTC m=+926.230172641" Sep 30 10:01:54 crc kubenswrapper[4970]: I0930 10:01:54.150767 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h8572" event={"ID":"0baee2b3-0d8f-4586-a636-c452b0d541d9","Type":"ContainerStarted","Data":"712632d620c53b18110485dff65e4dfa1cb355bf459904d43f554ca9da7aebbe"} Sep 30 10:01:54 crc kubenswrapper[4970]: I0930 10:01:54.151664 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-h8572" event={"ID":"0baee2b3-0d8f-4586-a636-c452b0d541d9","Type":"ContainerStarted","Data":"aefed6cc327a8f63b56b7cffe43b51e1f4ee811f6f18bd7116ebb3000ec03e00"} Sep 30 10:01:54 crc kubenswrapper[4970]: I0930 10:01:54.179461 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-h8572" podStartSLOduration=13.58070835 podStartE2EDuration="20.179438847s" podCreationTimestamp="2025-09-30 10:01:34 +0000 UTC" firstStartedPulling="2025-09-30 10:01:45.129337182 +0000 UTC m=+918.201188116" lastFinishedPulling="2025-09-30 10:01:51.728067679 +0000 UTC m=+924.799918613" observedRunningTime="2025-09-30 10:01:54.172860427 +0000 UTC m=+927.244711381" watchObservedRunningTime="2025-09-30 10:01:54.179438847 +0000 UTC m=+927.251289781" Sep 30 10:01:55 crc 
kubenswrapper[4970]: I0930 10:01:55.040312 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:55 crc kubenswrapper[4970]: I0930 10:01:55.040387 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-h8572" Sep 30 10:01:56 crc kubenswrapper[4970]: I0930 10:01:56.168815 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"23d91298-5a5e-428e-afe3-f5625b74f3e0","Type":"ContainerStarted","Data":"aff1a1a09228ef60dd7e34243f460ba86d6c6819937e7b861e81f31b8cde7107"} Sep 30 10:01:56 crc kubenswrapper[4970]: I0930 10:01:56.172668 4970 generic.go:334] "Generic (PLEG): container finished" podID="dee5bc19-bb45-4962-adaf-ff6561817272" containerID="c1f5db3f9b42cce05ad2eec1ad8cb868ec7b0eed25042a692ea836d36718c32c" exitCode=0 Sep 30 10:01:56 crc kubenswrapper[4970]: I0930 10:01:56.172712 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dee5bc19-bb45-4962-adaf-ff6561817272","Type":"ContainerDied","Data":"c1f5db3f9b42cce05ad2eec1ad8cb868ec7b0eed25042a692ea836d36718c32c"} Sep 30 10:01:56 crc kubenswrapper[4970]: I0930 10:01:56.174853 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6fb84c2f-32c7-4ac2-b7aa-343846c86bfa","Type":"ContainerStarted","Data":"d4ee609c1f921e5200c10fb5326d5e900757a1492932c4dc8a110292c22e9227"} Sep 30 10:01:56 crc kubenswrapper[4970]: I0930 10:01:56.176696 4970 generic.go:334] "Generic (PLEG): container finished" podID="60d8ffcf-dc53-4fac-92a0-64136b4b0d4b" containerID="29ab07917ac549895a0c4269a4435dc44198921dbc2444af8b1e7f2c089e8a31" exitCode=0 Sep 30 10:01:56 crc kubenswrapper[4970]: I0930 10:01:56.176790 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b","Type":"ContainerDied","Data":"29ab07917ac549895a0c4269a4435dc44198921dbc2444af8b1e7f2c089e8a31"} Sep 30 10:01:56 crc kubenswrapper[4970]: I0930 10:01:56.193755 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.669785306 podStartE2EDuration="22.193708291s" podCreationTimestamp="2025-09-30 10:01:34 +0000 UTC" firstStartedPulling="2025-09-30 10:01:45.760497487 +0000 UTC m=+918.832348411" lastFinishedPulling="2025-09-30 10:01:55.284420462 +0000 UTC m=+928.356271396" observedRunningTime="2025-09-30 10:01:56.193047183 +0000 UTC m=+929.264898117" watchObservedRunningTime="2025-09-30 10:01:56.193708291 +0000 UTC m=+929.265559215" Sep 30 10:01:56 crc kubenswrapper[4970]: I0930 10:01:56.254346 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=8.096987498 podStartE2EDuration="18.25432224s" podCreationTimestamp="2025-09-30 10:01:38 +0000 UTC" firstStartedPulling="2025-09-30 10:01:45.137783873 +0000 UTC m=+918.209634807" lastFinishedPulling="2025-09-30 10:01:55.295118615 +0000 UTC m=+928.366969549" observedRunningTime="2025-09-30 10:01:56.249525789 +0000 UTC m=+929.321376723" watchObservedRunningTime="2025-09-30 10:01:56.25432224 +0000 UTC m=+929.326173174" Sep 30 10:01:56 crc kubenswrapper[4970]: I0930 10:01:56.374489 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:56 crc kubenswrapper[4970]: I0930 10:01:56.422901 4970 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.189552 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"60d8ffcf-dc53-4fac-92a0-64136b4b0d4b","Type":"ContainerStarted","Data":"b270eda6923cb0c09293f7c0264944d936ef88166d044bbab11d009876de51ed"} Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.191828 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"dee5bc19-bb45-4962-adaf-ff6561817272","Type":"ContainerStarted","Data":"90926d74f7aeefc7907ec751faa3cdb3f0ca8e809f746e9bc461d02d678b2ae8"} Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.194534 4970 generic.go:334] "Generic (PLEG): container finished" podID="f63c6492-5afc-47b4-865c-7f2a1de471c0" containerID="02205e241098c75ec54976ecaecf1a56ffccd3927ea7dcdc16d906c5f9d453f1" exitCode=0 Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.194665 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" event={"ID":"f63c6492-5afc-47b4-865c-7f2a1de471c0","Type":"ContainerDied","Data":"02205e241098c75ec54976ecaecf1a56ffccd3927ea7dcdc16d906c5f9d453f1"} Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.195142 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.224626 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=20.144611843 podStartE2EDuration="29.224588437s" podCreationTimestamp="2025-09-30 10:01:28 +0000 UTC" firstStartedPulling="2025-09-30 10:01:42.570482041 +0000 UTC m=+915.642332975" lastFinishedPulling="2025-09-30 10:01:51.650458615 +0000 UTC m=+924.722309569" observedRunningTime="2025-09-30 10:01:57.219120928 +0000 UTC m=+930.290971892" watchObservedRunningTime="2025-09-30 10:01:57.224588437 +0000 UTC m=+930.296439411" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.254154 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.273738 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.894143792 podStartE2EDuration="31.273719002s" podCreationTimestamp="2025-09-30 10:01:26 +0000 UTC" firstStartedPulling="2025-09-30 10:01:44.310616262 +0000 UTC m=+917.382467196" lastFinishedPulling="2025-09-30 10:01:51.690191472 +0000 UTC m=+924.762042406" observedRunningTime="2025-09-30 10:01:57.270625387 +0000 UTC m=+930.342476321" watchObservedRunningTime="2025-09-30 10:01:57.273719002 +0000 UTC m=+930.345569936" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.555868 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-75xl5"] Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.578097 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-6v2k7"] Sep 30 10:01:57 crc kubenswrapper[4970]: E0930 10:01:57.578681 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4281f20f-ca65-49c5-9217-b9a730147510" containerName="registry-server" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.578705 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4281f20f-ca65-49c5-9217-b9a730147510" containerName="registry-server" Sep 30 
10:01:57 crc kubenswrapper[4970]: E0930 10:01:57.578719 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4281f20f-ca65-49c5-9217-b9a730147510" containerName="extract-content" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.578729 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4281f20f-ca65-49c5-9217-b9a730147510" containerName="extract-content" Sep 30 10:01:57 crc kubenswrapper[4970]: E0930 10:01:57.578755 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4281f20f-ca65-49c5-9217-b9a730147510" containerName="extract-utilities" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.578766 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4281f20f-ca65-49c5-9217-b9a730147510" containerName="extract-utilities" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.579004 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="4281f20f-ca65-49c5-9217-b9a730147510" containerName="registry-server" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.579876 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.581672 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.592090 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-8r9mn"] Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.593766 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.600377 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.607391 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-8r9mn"] Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.617194 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6v2k7"] Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.696805 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-8r9mn\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.697226 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-8r9mn\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.697256 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/20c3b444-9843-4584-81cb-9e5cb444c98b-ovs-rundir\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.697283 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c3b444-9843-4584-81cb-9e5cb444c98b-combined-ca-bundle\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.697569 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-config\") pod \"dnsmasq-dns-5bf47b49b7-8r9mn\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.697669 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwmq2\" (UniqueName: \"kubernetes.io/projected/20c3b444-9843-4584-81cb-9e5cb444c98b-kube-api-access-cwmq2\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.697737 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c3b444-9843-4584-81cb-9e5cb444c98b-config\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.697830 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20c3b444-9843-4584-81cb-9e5cb444c98b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.697905 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/20c3b444-9843-4584-81cb-9e5cb444c98b-ovn-rundir\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.697953 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5j9h\" (UniqueName: \"kubernetes.io/projected/70d70287-96b8-4814-8272-7e93e14c09a8-kube-api-access-w5j9h\") pod \"dnsmasq-dns-5bf47b49b7-8r9mn\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.781068 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.799820 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-8r9mn\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.799884 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-8r9mn\" (UID: 
\"70d70287-96b8-4814-8272-7e93e14c09a8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.799912 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/20c3b444-9843-4584-81cb-9e5cb444c98b-ovs-rundir\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.799935 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c3b444-9843-4584-81cb-9e5cb444c98b-combined-ca-bundle\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.800006 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-config\") pod \"dnsmasq-dns-5bf47b49b7-8r9mn\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.800040 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwmq2\" (UniqueName: \"kubernetes.io/projected/20c3b444-9843-4584-81cb-9e5cb444c98b-kube-api-access-cwmq2\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.800071 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c3b444-9843-4584-81cb-9e5cb444c98b-config\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.800104 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20c3b444-9843-4584-81cb-9e5cb444c98b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.800157 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/20c3b444-9843-4584-81cb-9e5cb444c98b-ovn-rundir\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.800198 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5j9h\" (UniqueName: \"kubernetes.io/projected/70d70287-96b8-4814-8272-7e93e14c09a8-kube-api-access-w5j9h\") pod \"dnsmasq-dns-5bf47b49b7-8r9mn\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.800822 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/20c3b444-9843-4584-81cb-9e5cb444c98b-ovn-rundir\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " 
pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.801062 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20c3b444-9843-4584-81cb-9e5cb444c98b-config\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.801093 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/20c3b444-9843-4584-81cb-9e5cb444c98b-ovs-rundir\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.801793 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-8r9mn\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.802323 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-8r9mn\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.802628 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-config\") pod \"dnsmasq-dns-5bf47b49b7-8r9mn\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.809147 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20c3b444-9843-4584-81cb-9e5cb444c98b-combined-ca-bundle\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.816781 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/20c3b444-9843-4584-81cb-9e5cb444c98b-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.817546 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwmq2\" (UniqueName: \"kubernetes.io/projected/20c3b444-9843-4584-81cb-9e5cb444c98b-kube-api-access-cwmq2\") pod \"ovn-controller-metrics-6v2k7\" (UID: \"20c3b444-9843-4584-81cb-9e5cb444c98b\") " pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.820446 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5j9h\" (UniqueName: \"kubernetes.io/projected/70d70287-96b8-4814-8272-7e93e14c09a8-kube-api-access-w5j9h\") pod \"dnsmasq-dns-5bf47b49b7-8r9mn\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.836490 4970 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.910855 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-6v2k7" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.921303 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.949079 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6gn5k"] Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.949369 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" podUID="8cc17151-a37a-4aea-91c4-02211a139b5d" containerName="dnsmasq-dns" containerID="cri-o://33fe198a15600c73017f35ca6769a76007f1ec9add44b8fd185fecf897dbcd9e" gracePeriod=10 Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.956729 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.989324 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-fzxhw"] Sep 30 10:01:57 crc kubenswrapper[4970]: I0930 10:01:57.990896 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.000743 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.022153 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fzxhw"] Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.126849 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-dns-svc\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.127836 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.128078 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp98r\" (UniqueName: \"kubernetes.io/projected/6f7b0129-f9df-4117-bb02-d5798f57aa8e-kube-api-access-cp98r\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.128235 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.128326 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-config\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.220624 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" event={"ID":"f63c6492-5afc-47b4-865c-7f2a1de471c0","Type":"ContainerStarted","Data":"48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c"} Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.220720 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" podUID="f63c6492-5afc-47b4-865c-7f2a1de471c0" containerName="dnsmasq-dns" containerID="cri-o://48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c" gracePeriod=10 Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.221166 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.226552 4970 generic.go:334] "Generic (PLEG): container finished" podID="8cc17151-a37a-4aea-91c4-02211a139b5d" containerID="33fe198a15600c73017f35ca6769a76007f1ec9add44b8fd185fecf897dbcd9e" exitCode=0 Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.227644 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" event={"ID":"8cc17151-a37a-4aea-91c4-02211a139b5d","Type":"ContainerDied","Data":"33fe198a15600c73017f35ca6769a76007f1ec9add44b8fd185fecf897dbcd9e"} Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.227760 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.229843 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-dns-svc\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.232458 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.232625 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp98r\" (UniqueName: \"kubernetes.io/projected/6f7b0129-f9df-4117-bb02-d5798f57aa8e-kube-api-access-cp98r\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.235575 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.235624 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-config\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.239439 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-dns-svc\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.239982 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.242282 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-config\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.244888 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.246098 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" podStartSLOduration=-9223372002.608692 podStartE2EDuration="34.246084548s" podCreationTimestamp="2025-09-30 10:01:24 +0000 UTC" firstStartedPulling="2025-09-30 10:01:25.736930668 +0000 UTC m=+898.808781602" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:01:58.239054765 +0000 UTC m=+931.310905699" watchObservedRunningTime="2025-09-30 10:01:58.246084548 +0000 UTC m=+931.317935482" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.270156 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp98r\" (UniqueName: \"kubernetes.io/projected/6f7b0129-f9df-4117-bb02-d5798f57aa8e-kube-api-access-cp98r\") pod \"dnsmasq-dns-8554648995-fzxhw\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") " pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.289349 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.289757 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.291932 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.447545 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fzxhw" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.545805 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-8r9mn"] Sep 30 10:01:58 crc kubenswrapper[4970]: W0930 10:01:58.560132 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70d70287_96b8_4814_8272_7e93e14c09a8.slice/crio-58a759f250a6d7034586779eb9464e565814e53f013cf6bf6e6124660e2eda38 WatchSource:0}: Error finding container 58a759f250a6d7034586779eb9464e565814e53f013cf6bf6e6124660e2eda38: Status 404 returned error can't find the container with id 58a759f250a6d7034586779eb9464e565814e53f013cf6bf6e6124660e2eda38 Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.664227 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.665804 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.675947 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.676182 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.676224 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.676276 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jfb5p" Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.683690 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.683946 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.736942 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-6v2k7"]
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.756021 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cc17151-a37a-4aea-91c4-02211a139b5d-dns-svc\") pod \"8cc17151-a37a-4aea-91c4-02211a139b5d\" (UID: \"8cc17151-a37a-4aea-91c4-02211a139b5d\") "
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.756520 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cc17151-a37a-4aea-91c4-02211a139b5d-config\") pod \"8cc17151-a37a-4aea-91c4-02211a139b5d\" (UID: \"8cc17151-a37a-4aea-91c4-02211a139b5d\") "
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.756610 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvbcn\" (UniqueName: \"kubernetes.io/projected/8cc17151-a37a-4aea-91c4-02211a139b5d-kube-api-access-rvbcn\") pod \"8cc17151-a37a-4aea-91c4-02211a139b5d\" (UID: \"8cc17151-a37a-4aea-91c4-02211a139b5d\") "
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.757006 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57282deb-d1f0-4e71-90e2-71c39075d208-config\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.757057 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57282deb-d1f0-4e71-90e2-71c39075d208-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.757085 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57282deb-d1f0-4e71-90e2-71c39075d208-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.757148 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57282deb-d1f0-4e71-90e2-71c39075d208-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.757179 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57282deb-d1f0-4e71-90e2-71c39075d208-scripts\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.757212 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/57282deb-d1f0-4e71-90e2-71c39075d208-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.757233 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6p9l\" (UniqueName: \"kubernetes.io/projected/57282deb-d1f0-4e71-90e2-71c39075d208-kube-api-access-d6p9l\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.792186 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cc17151-a37a-4aea-91c4-02211a139b5d-kube-api-access-rvbcn" (OuterVolumeSpecName: "kube-api-access-rvbcn") pod "8cc17151-a37a-4aea-91c4-02211a139b5d" (UID: "8cc17151-a37a-4aea-91c4-02211a139b5d"). InnerVolumeSpecName "kube-api-access-rvbcn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.842556 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.867530 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57282deb-d1f0-4e71-90e2-71c39075d208-config\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.867596 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57282deb-d1f0-4e71-90e2-71c39075d208-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.867628 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57282deb-d1f0-4e71-90e2-71c39075d208-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.867687 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57282deb-d1f0-4e71-90e2-71c39075d208-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.867723 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57282deb-d1f0-4e71-90e2-71c39075d208-scripts\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.867755 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/57282deb-d1f0-4e71-90e2-71c39075d208-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.867779 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6p9l\" (UniqueName: \"kubernetes.io/projected/57282deb-d1f0-4e71-90e2-71c39075d208-kube-api-access-d6p9l\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.867823 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvbcn\" (UniqueName: \"kubernetes.io/projected/8cc17151-a37a-4aea-91c4-02211a139b5d-kube-api-access-rvbcn\") on node \"crc\" DevicePath \"\""
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.869078 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/57282deb-d1f0-4e71-90e2-71c39075d208-config\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.869841 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/57282deb-d1f0-4e71-90e2-71c39075d208-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.871120 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/57282deb-d1f0-4e71-90e2-71c39075d208-scripts\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.873951 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cc17151-a37a-4aea-91c4-02211a139b5d-config" (OuterVolumeSpecName: "config") pod "8cc17151-a37a-4aea-91c4-02211a139b5d" (UID: "8cc17151-a37a-4aea-91c4-02211a139b5d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.884650 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57282deb-d1f0-4e71-90e2-71c39075d208-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.909101 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6p9l\" (UniqueName: \"kubernetes.io/projected/57282deb-d1f0-4e71-90e2-71c39075d208-kube-api-access-d6p9l\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.909758 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/57282deb-d1f0-4e71-90e2-71c39075d208-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.914444 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cc17151-a37a-4aea-91c4-02211a139b5d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8cc17151-a37a-4aea-91c4-02211a139b5d" (UID: "8cc17151-a37a-4aea-91c4-02211a139b5d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.914591 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/57282deb-d1f0-4e71-90e2-71c39075d208-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"57282deb-d1f0-4e71-90e2-71c39075d208\") " pod="openstack/ovn-northd-0"
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.968963 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmgj9\" (UniqueName: \"kubernetes.io/projected/f63c6492-5afc-47b4-865c-7f2a1de471c0-kube-api-access-wmgj9\") pod \"f63c6492-5afc-47b4-865c-7f2a1de471c0\" (UID: \"f63c6492-5afc-47b4-865c-7f2a1de471c0\") "
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.969564 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f63c6492-5afc-47b4-865c-7f2a1de471c0-config\") pod \"f63c6492-5afc-47b4-865c-7f2a1de471c0\" (UID: \"f63c6492-5afc-47b4-865c-7f2a1de471c0\") "
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.969689 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f63c6492-5afc-47b4-865c-7f2a1de471c0-dns-svc\") pod \"f63c6492-5afc-47b4-865c-7f2a1de471c0\" (UID: \"f63c6492-5afc-47b4-865c-7f2a1de471c0\") "
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.970370 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8cc17151-a37a-4aea-91c4-02211a139b5d-dns-svc\") on node \"crc\" DevicePath \"\""
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.970413 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cc17151-a37a-4aea-91c4-02211a139b5d-config\") on node \"crc\" DevicePath \"\""
Sep 30 10:01:58 crc kubenswrapper[4970]: I0930 10:01:58.975085 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f63c6492-5afc-47b4-865c-7f2a1de471c0-kube-api-access-wmgj9" (OuterVolumeSpecName: "kube-api-access-wmgj9") pod "f63c6492-5afc-47b4-865c-7f2a1de471c0" (UID: "f63c6492-5afc-47b4-865c-7f2a1de471c0"). InnerVolumeSpecName "kube-api-access-wmgj9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.015385 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f63c6492-5afc-47b4-865c-7f2a1de471c0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f63c6492-5afc-47b4-865c-7f2a1de471c0" (UID: "f63c6492-5afc-47b4-865c-7f2a1de471c0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.023659 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f63c6492-5afc-47b4-865c-7f2a1de471c0-config" (OuterVolumeSpecName: "config") pod "f63c6492-5afc-47b4-865c-7f2a1de471c0" (UID: "f63c6492-5afc-47b4-865c-7f2a1de471c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.045997 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.072441 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f63c6492-5afc-47b4-865c-7f2a1de471c0-config\") on node \"crc\" DevicePath \"\""
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.072489 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f63c6492-5afc-47b4-865c-7f2a1de471c0-dns-svc\") on node \"crc\" DevicePath \"\""
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.072502 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmgj9\" (UniqueName: \"kubernetes.io/projected/f63c6492-5afc-47b4-865c-7f2a1de471c0-kube-api-access-wmgj9\") on node \"crc\" DevicePath \"\""
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.215680 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fzxhw"]
Sep 30 10:01:59 crc kubenswrapper[4970]: W0930 10:01:59.216769 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f7b0129_f9df_4117_bb02_d5798f57aa8e.slice/crio-877a11713c8ea49080497918e1519b8db0e79a97897f73876cf2dbcc9ce12d2e WatchSource:0}: Error finding container 877a11713c8ea49080497918e1519b8db0e79a97897f73876cf2dbcc9ce12d2e: Status 404 returned error can't find the container with id 877a11713c8ea49080497918e1519b8db0e79a97897f73876cf2dbcc9ce12d2e
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.239034 4970 generic.go:334] "Generic (PLEG): container finished" podID="f63c6492-5afc-47b4-865c-7f2a1de471c0" containerID="48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c" exitCode=0
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.239120 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" event={"ID":"f63c6492-5afc-47b4-865c-7f2a1de471c0","Type":"ContainerDied","Data":"48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c"}
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.239153 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5" event={"ID":"f63c6492-5afc-47b4-865c-7f2a1de471c0","Type":"ContainerDied","Data":"45837baf2a0de2d74cca94097fb389a1bae8c01e9896d69d1fc23bca5fc8439d"}
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.239171 4970 scope.go:117] "RemoveContainer" containerID="48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c"
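[annotation] The generic.go:334 "Generic (PLEG): container finished" lines, immediately followed by kubelet.go:2453 "SyncLoop (PLEG): event for pod", show the Pod Lifecycle Event Generator translating container state changes observed at the runtime into ContainerDied/ContainerStarted events for the sync loop. A rough sketch of that diffing step, with hypothetical types (not the actual kubelet implementation):

    package main

    import "fmt"

    // Hypothetical PLEG-style relist: diff the previous and current
    // container states and emit lifecycle events.
    type event struct{ containerID, kind string }

    func relist(prev, curr map[string]string) []event {
    	var events []event
    	for id, state := range curr {
    		if prev[id] != state {
    			switch state {
    			case "running":
    				events = append(events, event{id, "ContainerStarted"})
    			case "exited":
    				events = append(events, event{id, "ContainerDied"})
    			}
    		}
    	}
    	return events
    }

    func main() {
    	prev := map[string]string{"48b721e9": "running"}
    	curr := map[string]string{"48b721e9": "exited"}
    	fmt.Println(relist(prev, curr)) // [{48b721e9 ContainerDied}]
    }

This is why each exited container produces both a "container finished" line (the relist observation) and a per-pod event line (the sync-loop consumption).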
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.239322 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-75xl5"
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.245155 4970 generic.go:334] "Generic (PLEG): container finished" podID="70d70287-96b8-4814-8272-7e93e14c09a8" containerID="0a84e31a0e00d46fcb13cbc71de2f111e59bc283249299112271da3220f4df06" exitCode=0
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.245276 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" event={"ID":"70d70287-96b8-4814-8272-7e93e14c09a8","Type":"ContainerDied","Data":"0a84e31a0e00d46fcb13cbc71de2f111e59bc283249299112271da3220f4df06"}
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.245317 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" event={"ID":"70d70287-96b8-4814-8272-7e93e14c09a8","Type":"ContainerStarted","Data":"58a759f250a6d7034586779eb9464e565814e53f013cf6bf6e6124660e2eda38"}
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.251304 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k" event={"ID":"8cc17151-a37a-4aea-91c4-02211a139b5d","Type":"ContainerDied","Data":"3c09be5307a0436adb635a405fa337c01fbacd59cfc54aca9c2d8c6b05b032bd"}
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.251331 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-6gn5k"
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.253130 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fzxhw" event={"ID":"6f7b0129-f9df-4117-bb02-d5798f57aa8e","Type":"ContainerStarted","Data":"877a11713c8ea49080497918e1519b8db0e79a97897f73876cf2dbcc9ce12d2e"}
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.265146 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6v2k7" event={"ID":"20c3b444-9843-4584-81cb-9e5cb444c98b","Type":"ContainerStarted","Data":"f1f168ef06c1c40155ba8b4036b28a0b6074f7c8c9cc6ef52e7c87b86493c98b"}
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.265208 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-6v2k7" event={"ID":"20c3b444-9843-4584-81cb-9e5cb444c98b","Type":"ContainerStarted","Data":"307e903dc41e92bc4c21ef719bc5183f2f304493a1d8f02bc88fafd0fc1d3f89"}
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.279901 4970 scope.go:117] "RemoveContainer" containerID="02205e241098c75ec54976ecaecf1a56ffccd3927ea7dcdc16d906c5f9d453f1"
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.298797 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-75xl5"]
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.307212 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-75xl5"]
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.326604 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-6v2k7" podStartSLOduration=2.326569643 podStartE2EDuration="2.326569643s" podCreationTimestamp="2025-09-30 10:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:01:59.313626448 +0000 UTC m=+932.385477412" watchObservedRunningTime="2025-09-30 10:01:59.326569643 +0000 UTC m=+932.398420577"
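[annotation] pod_startup_latency_tracker's "Observed pod startup duration" line reports the elapsed time from podCreationTimestamp to the observed running time; here podStartE2EDuration is 2.326s, and the zero-valued firstStartedPulling/lastFinishedPulling timestamps indicate no image pull was needed. A back-of-the-envelope check using the values from the entry above:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values copied from the log entry for ovn-controller-metrics-6v2k7.
    	created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST",
    		"2025-09-30 10:01:57 +0000 UTC")
    	observed, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
    		"2025-09-30 10:01:59.326569643 +0000 UTC")
    	// Prints 2.326569643s, matching podStartSLOduration/podStartE2EDuration.
    	fmt.Println(observed.Sub(created))
    }

When image pulls do occur (see the ovn-northd-0 entry later in this log), the SLO duration excludes the pull window, which is why it can be smaller than the E2E duration.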
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.356097 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6gn5k"]
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.369902 4970 scope.go:117] "RemoveContainer" containerID="48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c"
Sep 30 10:01:59 crc kubenswrapper[4970]: E0930 10:01:59.374130 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c\": container with ID starting with 48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c not found: ID does not exist" containerID="48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c"
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.374185 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c"} err="failed to get container status \"48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c\": rpc error: code = NotFound desc = could not find container \"48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c\": container with ID starting with 48b721e903cc876ea0c838442ff64dc83d8f35ee525c660e8902eda23066a29c not found: ID does not exist"
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.374218 4970 scope.go:117] "RemoveContainer" containerID="02205e241098c75ec54976ecaecf1a56ffccd3927ea7dcdc16d906c5f9d453f1"
Sep 30 10:01:59 crc kubenswrapper[4970]: E0930 10:01:59.379775 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02205e241098c75ec54976ecaecf1a56ffccd3927ea7dcdc16d906c5f9d453f1\": container with ID starting with 02205e241098c75ec54976ecaecf1a56ffccd3927ea7dcdc16d906c5f9d453f1 not found: ID does not exist" containerID="02205e241098c75ec54976ecaecf1a56ffccd3927ea7dcdc16d906c5f9d453f1"
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.379847 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02205e241098c75ec54976ecaecf1a56ffccd3927ea7dcdc16d906c5f9d453f1"} err="failed to get container status \"02205e241098c75ec54976ecaecf1a56ffccd3927ea7dcdc16d906c5f9d453f1\": rpc error: code = NotFound desc = could not find container \"02205e241098c75ec54976ecaecf1a56ffccd3927ea7dcdc16d906c5f9d453f1\": container with ID starting with 02205e241098c75ec54976ecaecf1a56ffccd3927ea7dcdc16d906c5f9d453f1 not found: ID does not exist"
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.379914 4970 scope.go:117] "RemoveContainer" containerID="33fe198a15600c73017f35ca6769a76007f1ec9add44b8fd185fecf897dbcd9e"
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.387638 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-6gn5k"]
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.472875 4970 scope.go:117] "RemoveContainer" containerID="6bbcdf868d540af5968103c68276fde46beb82eb040ed17ad23112e7972a0615"
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.544146 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.544569 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
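[annotation] The RemoveContainer -> "ContainerStatus from runtime service failed ... NotFound" -> "DeleteContainer returned error" sequences above are benign: the containers were already removed, and a deletion attempt for an already-deleted container is effectively a success. A sketch of that idempotent pattern against a CRI-like client (the gRPC status codes are real; the client function here is hypothetical and requires the google.golang.org/grpc module):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer tolerates NotFound: retrying deletion of a
    // container that is already gone is treated as success.
    func removeContainer(remove func(id string) error, id string) error {
    	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
    		return err // a real failure
    	} else if err != nil {
    		fmt.Printf("container %s already gone; treating as removed\n", id)
    	}
    	return nil
    }

    func main() {
    	notFound := func(id string) error {
    		return status.Error(codes.NotFound, "could not find container "+id)
    	}
    	if err := removeContainer(notFound, "48b721e903cc"); err != nil {
    		fmt.Println("remove failed:", err)
    	}
    }

The E-level log lines are therefore noise from the underlying RPC, not a sign that pod cleanup is stuck.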
source="api" pods=["openstack/ovn-northd-0"] Sep 30 10:01:59 crc kubenswrapper[4970]: W0930 10:01:59.572429 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57282deb_d1f0_4e71_90e2_71c39075d208.slice/crio-087eae02c9477416362a92399b6bfd495820f8193aa0e26733e87b22fb856010 WatchSource:0}: Error finding container 087eae02c9477416362a92399b6bfd495820f8193aa0e26733e87b22fb856010: Status 404 returned error can't find the container with id 087eae02c9477416362a92399b6bfd495820f8193aa0e26733e87b22fb856010 Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.685817 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cc17151-a37a-4aea-91c4-02211a139b5d" path="/var/lib/kubelet/pods/8cc17151-a37a-4aea-91c4-02211a139b5d/volumes" Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.686496 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f63c6492-5afc-47b4-865c-7f2a1de471c0" path="/var/lib/kubelet/pods/f63c6492-5afc-47b4-865c-7f2a1de471c0/volumes" Sep 30 10:01:59 crc kubenswrapper[4970]: I0930 10:01:59.794129 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Sep 30 10:02:00 crc kubenswrapper[4970]: I0930 10:02:00.274353 4970 generic.go:334] "Generic (PLEG): container finished" podID="6f7b0129-f9df-4117-bb02-d5798f57aa8e" containerID="7717e1ebe633b757f10a61fba4e43c8f70e159350e71e4f2083ae1a532b1ce42" exitCode=0 Sep 30 10:02:00 crc kubenswrapper[4970]: I0930 10:02:00.274490 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fzxhw" event={"ID":"6f7b0129-f9df-4117-bb02-d5798f57aa8e","Type":"ContainerDied","Data":"7717e1ebe633b757f10a61fba4e43c8f70e159350e71e4f2083ae1a532b1ce42"} Sep 30 10:02:00 crc kubenswrapper[4970]: I0930 10:02:00.278483 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"57282deb-d1f0-4e71-90e2-71c39075d208","Type":"ContainerStarted","Data":"087eae02c9477416362a92399b6bfd495820f8193aa0e26733e87b22fb856010"} Sep 30 10:02:00 crc kubenswrapper[4970]: I0930 10:02:00.281862 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" event={"ID":"70d70287-96b8-4814-8272-7e93e14c09a8","Type":"ContainerStarted","Data":"969d307fc2c7f87c28f32a6168eaae4cf7c00e8e3590f2328767a5987b38122c"} Sep 30 10:02:00 crc kubenswrapper[4970]: I0930 10:02:00.282008 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:02:00 crc kubenswrapper[4970]: I0930 10:02:00.335373 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" podStartSLOduration=3.335343744 podStartE2EDuration="3.335343744s" podCreationTimestamp="2025-09-30 10:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:02:00.326972005 +0000 UTC m=+933.398822949" watchObservedRunningTime="2025-09-30 10:02:00.335343744 +0000 UTC m=+933.407194678" Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.300522 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fzxhw" event={"ID":"6f7b0129-f9df-4117-bb02-d5798f57aa8e","Type":"ContainerStarted","Data":"93ee17f7bc5c56770e9db93758388a717b9debb96818c5240c4b90b4a41ac9fd"} Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 
10:02:01.330596 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-fzxhw" podStartSLOduration=4.330564645 podStartE2EDuration="4.330564645s" podCreationTimestamp="2025-09-30 10:01:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:02:01.31945034 +0000 UTC m=+934.391301274" watchObservedRunningTime="2025-09-30 10:02:01.330564645 +0000 UTC m=+934.402415579" Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.623139 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-8r9mn"] Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.669291 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ghm66"] Sep 30 10:02:01 crc kubenswrapper[4970]: E0930 10:02:01.669632 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63c6492-5afc-47b4-865c-7f2a1de471c0" containerName="init" Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.669649 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63c6492-5afc-47b4-865c-7f2a1de471c0" containerName="init" Sep 30 10:02:01 crc kubenswrapper[4970]: E0930 10:02:01.669672 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cc17151-a37a-4aea-91c4-02211a139b5d" containerName="init" Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.669678 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc17151-a37a-4aea-91c4-02211a139b5d" containerName="init" Sep 30 10:02:01 crc kubenswrapper[4970]: E0930 10:02:01.669691 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cc17151-a37a-4aea-91c4-02211a139b5d" containerName="dnsmasq-dns" Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.669697 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cc17151-a37a-4aea-91c4-02211a139b5d" containerName="dnsmasq-dns" Sep 30 10:02:01 crc kubenswrapper[4970]: E0930 10:02:01.669713 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f63c6492-5afc-47b4-865c-7f2a1de471c0" containerName="dnsmasq-dns" Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.669721 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f63c6492-5afc-47b4-865c-7f2a1de471c0" containerName="dnsmasq-dns" Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.669865 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cc17151-a37a-4aea-91c4-02211a139b5d" containerName="dnsmasq-dns" Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.669886 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f63c6492-5afc-47b4-865c-7f2a1de471c0" containerName="dnsmasq-dns" Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.671852 4970 util.go:30] "No sandbox for pod can be found. 
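[annotation] The cpu_manager/memory_manager "RemoveStaleState" burst fires when the new pod (dnsmasq-dns-b8fbc5445-ghm66) is admitted: resource-manager state belonging to containers of pods that no longer exist (the two deleted dnsmasq pods) is pruned first, and the E-level lines are just loud logging of that cleanup, not failures. A minimal sketch of the idea, with a hypothetical state map rather than the kubelet's:

    package main

    import "fmt"

    type key struct{ podUID, container string }

    // removeStaleState drops per-container assignments whose pod is no
    // longer active, mirroring what cpu_manager.go's RemoveStaleState does.
    func removeStaleState(assignments map[key][]int, active map[string]bool) {
    	for k := range assignments {
    		if !active[k.podUID] {
    			fmt.Printf("RemoveStaleState: removing container %q of pod %s\n",
    				k.container, k.podUID)
    			delete(assignments, k)
    		}
    	}
    }

    func main() {
    	assignments := map[key][]int{
    		{"8cc17151-a37a-4aea-91c4-02211a139b5d", "dnsmasq-dns"}: {2, 3},
    	}
    	active := map[string]bool{"9227c3d4-a39d-4316-9153-157469a2d006": true}
    	removeStaleState(assignments, active)
    }

Pruning at admission time keeps the managers' checkpoints from accumulating entries for pods the API server has already removed.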
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.671852 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.698294 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.698334 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ghm66"]
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.751081 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.751123 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.751179 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-config\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.751212 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.751294 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttlgw\" (UniqueName: \"kubernetes.io/projected/9227c3d4-a39d-4316-9153-157469a2d006-kube-api-access-ttlgw\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.852811 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttlgw\" (UniqueName: \"kubernetes.io/projected/9227c3d4-a39d-4316-9153-157469a2d006-kube-api-access-ttlgw\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.853349 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.853391 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.853451 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-config\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.853479 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.854551 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.855608 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.857702 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-config\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.858690 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.875666 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttlgw\" (UniqueName: \"kubernetes.io/projected/9227c3d4-a39d-4316-9153-157469a2d006-kube-api-access-ttlgw\") pod \"dnsmasq-dns-b8fbc5445-ghm66\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:01 crc kubenswrapper[4970]: I0930 10:02:01.998703 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.310788 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-fzxhw"
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.375756 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.477206 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="dee5bc19-bb45-4962-adaf-ff6561817272" containerName="galera" probeResult="failure" output=<
Sep 30 10:02:02 crc kubenswrapper[4970]: wsrep_local_state_comment (Joined) differs from Synced
Sep 30 10:02:02 crc kubenswrapper[4970]: >
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.507292 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ghm66"]
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.764920 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.772581 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.777475 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.778167 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.778544 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.782560 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-h29qm"
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.793546 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.873967 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/420e577e-2e62-4d35-b9c7-f354dd81add8-cache\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0"
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.874071 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0"
Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.874139 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/420e577e-2e62-4d35-b9c7-f354dd81add8-lock\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0"
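[annotation] The multi-line "Probe failed" block above is a readiness probe whose captured output ("wsrep_local_state_comment (Joined) differs from Synced") explains the failure: the galera node has joined the cluster but has not finished syncing, so it is held out of service endpoints until it reports Synced. Assuming the check is an exec-style probe that compares the wsrep state and exits non-zero (the exact operator script is not shown in this log), the shape of such a check is roughly:

    package main

    import (
    	"fmt"
    	"os"
    )

    // Sketch of a galera readiness check: ready only once the node
    // reports Synced. In the log the probe fails while still "Joined".
    func main() {
    	// Stand-in for querying SHOW STATUS LIKE 'wsrep_local_state_comment'.
    	state := os.Getenv("WSREP_STATE")
    	if state != "Synced" {
    		fmt.Printf("wsrep_local_state_comment (%s) differs from Synced\n", state)
    		os.Exit(1) // non-zero exit marks the probe attempt as failed
    	}
    }

The same pod flips to readiness "ready" later in the log once the sync completes, which is the expected lifecycle for a joining galera replica.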
" pod="openstack/swift-storage-0" Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.874225 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.976220 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/420e577e-2e62-4d35-b9c7-f354dd81add8-lock\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.976285 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpz9f\" (UniqueName: \"kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-kube-api-access-rpz9f\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.976331 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.976376 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/420e577e-2e62-4d35-b9c7-f354dd81add8-cache\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.976416 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:02 crc kubenswrapper[4970]: E0930 10:02:02.976641 4970 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 10:02:02 crc kubenswrapper[4970]: E0930 10:02:02.976684 4970 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 10:02:02 crc kubenswrapper[4970]: E0930 10:02:02.976779 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift podName:420e577e-2e62-4d35-b9c7-f354dd81add8 nodeName:}" failed. No retries permitted until 2025-09-30 10:02:03.476748464 +0000 UTC m=+936.548599398 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift") pod "swift-storage-0" (UID: "420e577e-2e62-4d35-b9c7-f354dd81add8") : configmap "swift-ring-files" not found Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.976876 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/swift-storage-0" Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.977030 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/420e577e-2e62-4d35-b9c7-f354dd81add8-lock\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.977157 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/420e577e-2e62-4d35-b9c7-f354dd81add8-cache\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.995335 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpz9f\" (UniqueName: \"kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-kube-api-access-rpz9f\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:02 crc kubenswrapper[4970]: I0930 10:02:02.999688 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:03 crc kubenswrapper[4970]: I0930 10:02:03.320549 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"57282deb-d1f0-4e71-90e2-71c39075d208","Type":"ContainerStarted","Data":"0ad425a0b4c96fbd4b929ecb6469d7bda50f76bcaf6009187615d1924dc7e9ba"} Sep 30 10:02:03 crc kubenswrapper[4970]: I0930 10:02:03.323969 4970 generic.go:334] "Generic (PLEG): container finished" podID="9227c3d4-a39d-4316-9153-157469a2d006" containerID="19d6092f32f466986ea05755f91995a1779619fd4bd7df08cc2ea1f9d07c7360" exitCode=0 Sep 30 10:02:03 crc kubenswrapper[4970]: I0930 10:02:03.326364 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66" event={"ID":"9227c3d4-a39d-4316-9153-157469a2d006","Type":"ContainerDied","Data":"19d6092f32f466986ea05755f91995a1779619fd4bd7df08cc2ea1f9d07c7360"} Sep 30 10:02:03 crc kubenswrapper[4970]: I0930 10:02:03.328365 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66" event={"ID":"9227c3d4-a39d-4316-9153-157469a2d006","Type":"ContainerStarted","Data":"1d5828b2e963b0427e9b8ecbbb55bef7e57cce5220184dba9261acecef704013"} Sep 30 10:02:03 crc kubenswrapper[4970]: I0930 10:02:03.327479 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" podUID="70d70287-96b8-4814-8272-7e93e14c09a8" containerName="dnsmasq-dns" containerID="cri-o://969d307fc2c7f87c28f32a6168eaae4cf7c00e8e3590f2328767a5987b38122c" gracePeriod=10 Sep 30 
10:02:04 crc kubenswrapper[4970]: I0930 10:02:03.489107 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:04 crc kubenswrapper[4970]: E0930 10:02:03.489514 4970 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 10:02:04 crc kubenswrapper[4970]: E0930 10:02:03.489533 4970 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 10:02:04 crc kubenswrapper[4970]: E0930 10:02:03.489583 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift podName:420e577e-2e62-4d35-b9c7-f354dd81add8 nodeName:}" failed. No retries permitted until 2025-09-30 10:02:04.48956183 +0000 UTC m=+937.561412764 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift") pod "swift-storage-0" (UID: "420e577e-2e62-4d35-b9c7-f354dd81add8") : configmap "swift-ring-files" not found Sep 30 10:02:04 crc kubenswrapper[4970]: I0930 10:02:04.334981 4970 generic.go:334] "Generic (PLEG): container finished" podID="70d70287-96b8-4814-8272-7e93e14c09a8" containerID="969d307fc2c7f87c28f32a6168eaae4cf7c00e8e3590f2328767a5987b38122c" exitCode=0 Sep 30 10:02:04 crc kubenswrapper[4970]: I0930 10:02:04.335040 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" event={"ID":"70d70287-96b8-4814-8272-7e93e14c09a8","Type":"ContainerDied","Data":"969d307fc2c7f87c28f32a6168eaae4cf7c00e8e3590f2328767a5987b38122c"} Sep 30 10:02:04 crc kubenswrapper[4970]: I0930 10:02:04.508613 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:04 crc kubenswrapper[4970]: E0930 10:02:04.508913 4970 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 10:02:04 crc kubenswrapper[4970]: E0930 10:02:04.508956 4970 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 10:02:04 crc kubenswrapper[4970]: E0930 10:02:04.509056 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift podName:420e577e-2e62-4d35-b9c7-f354dd81add8 nodeName:}" failed. No retries permitted until 2025-09-30 10:02:06.509027844 +0000 UTC m=+939.580878778 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift") pod "swift-storage-0" (UID: "420e577e-2e62-4d35-b9c7-f354dd81add8") : configmap "swift-ring-files" not found Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.551505 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0" Sep 30 10:02:06 crc kubenswrapper[4970]: E0930 10:02:06.551836 4970 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Sep 30 10:02:06 crc kubenswrapper[4970]: E0930 10:02:06.552032 4970 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Sep 30 10:02:06 crc kubenswrapper[4970]: E0930 10:02:06.552096 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift podName:420e577e-2e62-4d35-b9c7-f354dd81add8 nodeName:}" failed. No retries permitted until 2025-09-30 10:02:10.552077056 +0000 UTC m=+943.623927990 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift") pod "swift-storage-0" (UID: "420e577e-2e62-4d35-b9c7-f354dd81add8") : configmap "swift-ring-files" not found Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.723530 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ctbn9"] Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.724664 4970 util.go:30] "No sandbox for pod can be found. 
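[annotation] Note the durationBeforeRetry progression for the failing etc-swift mount: 500ms, 1s, 2s, 4s. nestedpendingoperations applies per-operation exponential backoff, so the kubelet keeps retrying MountVolume.SetUp at doubling intervals until the missing "swift-ring-files" configmap appears (the swift-ring-rebalance pod created just below appears to be what eventually provides it). The doubling schedule itself can be reproduced with:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Exponential backoff as seen in the log: each failed
    	// MountVolume.SetUp doubles the wait before the next retry.
    	delay := 500 * time.Millisecond
    	for i := 0; i < 4; i++ {
    		fmt.Printf("retry %d: no retries permitted for %v\n", i+1, delay)
    		delay *= 2 // 500ms -> 1s -> 2s -> 4s -> ...
    	}
    }

Backoff here prevents a missing dependency from turning into a tight retry loop while still converging quickly once the configmap exists.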
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.724664 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.727625 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.727651 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.736458 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.738605 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ctbn9"]
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.858544 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-combined-ca-bundle\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.858663 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-dispersionconf\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.858711 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-swiftconf\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.859195 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-ring-data-devices\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.859339 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-scripts\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.859441 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwlc2\" (UniqueName: \"kubernetes.io/projected/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-kube-api-access-hwlc2\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.859567 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-etc-swift\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.962222 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-ring-data-devices\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.962282 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-scripts\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.962316 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwlc2\" (UniqueName: \"kubernetes.io/projected/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-kube-api-access-hwlc2\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.962347 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-etc-swift\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.962371 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-combined-ca-bundle\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.962397 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-dispersionconf\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.962419 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-swiftconf\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.963129 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-etc-swift\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.963514 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-ring-data-devices\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.963562 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-scripts\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.969328 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-swiftconf\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.971845 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-combined-ca-bundle\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.972379 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-dispersionconf\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:06 crc kubenswrapper[4970]: I0930 10:02:06.984738 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwlc2\" (UniqueName: \"kubernetes.io/projected/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-kube-api-access-hwlc2\") pod \"swift-ring-rebalance-ctbn9\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") " pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:07 crc kubenswrapper[4970]: I0930 10:02:07.059930 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:07 crc kubenswrapper[4970]: I0930 10:02:07.362880 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"57282deb-d1f0-4e71-90e2-71c39075d208","Type":"ContainerStarted","Data":"e4fb6693c01bbf9bd761a10b5e9ebb9ffbe9c59fb72dace5ca54cc3e2cbd4f67"}
Sep 30 10:02:07 crc kubenswrapper[4970]: I0930 10:02:07.521542 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ctbn9"]
Sep 30 10:02:07 crc kubenswrapper[4970]: W0930 10:02:07.526253 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a77b5ae_8010_4d5d_8cec_ae87b4fba9d5.slice/crio-575fc1112c42f67da1cddd6215c376fa6a896ce93422258f1dd8d12d489ea576 WatchSource:0}: Error finding container 575fc1112c42f67da1cddd6215c376fa6a896ce93422258f1dd8d12d489ea576: Status 404 returned error can't find the container with id 575fc1112c42f67da1cddd6215c376fa6a896ce93422258f1dd8d12d489ea576
Sep 30 10:02:07 crc kubenswrapper[4970]: I0930 10:02:07.926124 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" podUID="70d70287-96b8-4814-8272-7e93e14c09a8" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused"
Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.364038 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.418853 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.420333 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66" event={"ID":"9227c3d4-a39d-4316-9153-157469a2d006","Type":"ContainerStarted","Data":"80faabc3fd5d18710cefdae7723e0c7061d9dc1450004a3e1ccae13c10a5d8a6"}
Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.420715 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.425772 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ctbn9" event={"ID":"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5","Type":"ContainerStarted","Data":"575fc1112c42f67da1cddd6215c376fa6a896ce93422258f1dd8d12d489ea576"}
Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.425814 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.456057 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-fzxhw"
Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.468169 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66" podStartSLOduration=7.468128822 podStartE2EDuration="7.468128822s" podCreationTimestamp="2025-09-30 10:02:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:02:08.467927136 +0000 UTC m=+941.539778070" watchObservedRunningTime="2025-09-30 10:02:08.468128822 +0000 UTC m=+941.539979756"
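[annotation] The prober.go:107 entry above shows a TCP readiness probe failing with "connect: connection refused" against 10.217.0.113:5353 while the old dnsmasq pod is being killed with a 10s grace period; probes keep firing until the container is actually gone, so this failure is expected during teardown. Functionally, a TCP probe is just a dial with a timeout. A sketch (not the kubelet's prober itself):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probeTCP mimics a kubelet TCP readiness probe: success is simply
    // being able to open (and immediately close) a connection.
    func probeTCP(addr string, timeout time.Duration) error {
    	conn, err := net.DialTimeout("tcp", addr, timeout)
    	if err != nil {
    		return err // e.g. "connect: connection refused" as in the log
    	}
    	return conn.Close()
    }

    func main() {
    	if err := probeTCP("10.217.0.113:5353", time.Second); err != nil {
    		fmt.Println("Probe failed:", err)
    	}
    }

Also worth noting just below: the ovn-northd-0 startup entry has real firstStartedPulling/lastFinishedPulling timestamps, so its podStartSLOduration (7.586s) is smaller than its podStartE2EDuration (10.527s); the SLO figure excludes the image-pull window.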
failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="60d8ffcf-dc53-4fac-92a0-64136b4b0d4b" containerName="galera" probeResult="failure" output=< Sep 30 10:02:08 crc kubenswrapper[4970]: wsrep_local_state_comment (Joined) differs from Synced Sep 30 10:02:08 crc kubenswrapper[4970]: > Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.527383 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=7.585910741 podStartE2EDuration="10.527360833s" podCreationTimestamp="2025-09-30 10:01:58 +0000 UTC" firstStartedPulling="2025-09-30 10:01:59.577300586 +0000 UTC m=+932.649151520" lastFinishedPulling="2025-09-30 10:02:02.518750678 +0000 UTC m=+935.590601612" observedRunningTime="2025-09-30 10:02:08.526753946 +0000 UTC m=+941.598604870" watchObservedRunningTime="2025-09-30 10:02:08.527360833 +0000 UTC m=+941.599211757" Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.612588 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.709297 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-ovsdbserver-nb\") pod \"70d70287-96b8-4814-8272-7e93e14c09a8\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.709363 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-dns-svc\") pod \"70d70287-96b8-4814-8272-7e93e14c09a8\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.709437 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5j9h\" (UniqueName: \"kubernetes.io/projected/70d70287-96b8-4814-8272-7e93e14c09a8-kube-api-access-w5j9h\") pod \"70d70287-96b8-4814-8272-7e93e14c09a8\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.709556 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-config\") pod \"70d70287-96b8-4814-8272-7e93e14c09a8\" (UID: \"70d70287-96b8-4814-8272-7e93e14c09a8\") " Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.720835 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d70287-96b8-4814-8272-7e93e14c09a8-kube-api-access-w5j9h" (OuterVolumeSpecName: "kube-api-access-w5j9h") pod "70d70287-96b8-4814-8272-7e93e14c09a8" (UID: "70d70287-96b8-4814-8272-7e93e14c09a8"). InnerVolumeSpecName "kube-api-access-w5j9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.761966 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "70d70287-96b8-4814-8272-7e93e14c09a8" (UID: "70d70287-96b8-4814-8272-7e93e14c09a8"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.764511 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-config" (OuterVolumeSpecName: "config") pod "70d70287-96b8-4814-8272-7e93e14c09a8" (UID: "70d70287-96b8-4814-8272-7e93e14c09a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.766450 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "70d70287-96b8-4814-8272-7e93e14c09a8" (UID: "70d70287-96b8-4814-8272-7e93e14c09a8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.812370 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.812415 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.812429 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5j9h\" (UniqueName: \"kubernetes.io/projected/70d70287-96b8-4814-8272-7e93e14c09a8-kube-api-access-w5j9h\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:08 crc kubenswrapper[4970]: I0930 10:02:08.812442 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70d70287-96b8-4814-8272-7e93e14c09a8-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.445321 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-8r9mn" event={"ID":"70d70287-96b8-4814-8272-7e93e14c09a8","Type":"ContainerDied","Data":"58a759f250a6d7034586779eb9464e565814e53f013cf6bf6e6124660e2eda38"} Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.445378 4970 scope.go:117] "RemoveContainer" containerID="969d307fc2c7f87c28f32a6168eaae4cf7c00e8e3590f2328767a5987b38122c" Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.445397 4970 util.go:48] "No ready sandbox for pod can be found. 
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.456891 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-dbz5v"]
Sep 30 10:02:09 crc kubenswrapper[4970]: E0930 10:02:09.457519 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d70287-96b8-4814-8272-7e93e14c09a8" containerName="init"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.457547 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d70287-96b8-4814-8272-7e93e14c09a8" containerName="init"
Sep 30 10:02:09 crc kubenswrapper[4970]: E0930 10:02:09.457564 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d70287-96b8-4814-8272-7e93e14c09a8" containerName="dnsmasq-dns"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.457576 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d70287-96b8-4814-8272-7e93e14c09a8" containerName="dnsmasq-dns"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.457803 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d70287-96b8-4814-8272-7e93e14c09a8" containerName="dnsmasq-dns"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.458634 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dbz5v"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.478353 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dbz5v"]
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.506565 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-8r9mn"]
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.512402 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-8r9mn"]
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.594965 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.631549 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tbts\" (UniqueName: \"kubernetes.io/projected/22f61c1e-f425-4745-ab1c-b93977f1152c-kube-api-access-2tbts\") pod \"keystone-db-create-dbz5v\" (UID: \"22f61c1e-f425-4745-ab1c-b93977f1152c\") " pod="openstack/keystone-db-create-dbz5v"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.682000 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d70287-96b8-4814-8272-7e93e14c09a8" path="/var/lib/kubelet/pods/70d70287-96b8-4814-8272-7e93e14c09a8/volumes"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.685155 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-mn485"]
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.686518 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mn485"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.693008 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mn485"]
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.733240 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tbts\" (UniqueName: \"kubernetes.io/projected/22f61c1e-f425-4745-ab1c-b93977f1152c-kube-api-access-2tbts\") pod \"keystone-db-create-dbz5v\" (UID: \"22f61c1e-f425-4745-ab1c-b93977f1152c\") " pod="openstack/keystone-db-create-dbz5v"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.756678 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tbts\" (UniqueName: \"kubernetes.io/projected/22f61c1e-f425-4745-ab1c-b93977f1152c-kube-api-access-2tbts\") pod \"keystone-db-create-dbz5v\" (UID: \"22f61c1e-f425-4745-ab1c-b93977f1152c\") " pod="openstack/keystone-db-create-dbz5v"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.785387 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dbz5v"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.836527 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krgs8\" (UniqueName: \"kubernetes.io/projected/35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2-kube-api-access-krgs8\") pod \"placement-db-create-mn485\" (UID: \"35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2\") " pod="openstack/placement-db-create-mn485"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.939558 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krgs8\" (UniqueName: \"kubernetes.io/projected/35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2-kube-api-access-krgs8\") pod \"placement-db-create-mn485\" (UID: \"35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2\") " pod="openstack/placement-db-create-mn485"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.940523 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-rjrqv"]
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.942061 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rjrqv"
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.960811 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rjrqv"]
Sep 30 10:02:09 crc kubenswrapper[4970]: I0930 10:02:09.963173 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krgs8\" (UniqueName: \"kubernetes.io/projected/35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2-kube-api-access-krgs8\") pod \"placement-db-create-mn485\" (UID: \"35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2\") " pod="openstack/placement-db-create-mn485"
Sep 30 10:02:10 crc kubenswrapper[4970]: I0930 10:02:10.013607 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mn485"
Sep 30 10:02:10 crc kubenswrapper[4970]: I0930 10:02:10.041886 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfmx6\" (UniqueName: \"kubernetes.io/projected/576a687d-6e95-4703-829e-574a84a838dd-kube-api-access-lfmx6\") pod \"glance-db-create-rjrqv\" (UID: \"576a687d-6e95-4703-829e-574a84a838dd\") " pod="openstack/glance-db-create-rjrqv"
Sep 30 10:02:10 crc kubenswrapper[4970]: I0930 10:02:10.143918 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfmx6\" (UniqueName: \"kubernetes.io/projected/576a687d-6e95-4703-829e-574a84a838dd-kube-api-access-lfmx6\") pod \"glance-db-create-rjrqv\" (UID: \"576a687d-6e95-4703-829e-574a84a838dd\") " pod="openstack/glance-db-create-rjrqv"
Sep 30 10:02:10 crc kubenswrapper[4970]: I0930 10:02:10.170056 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfmx6\" (UniqueName: \"kubernetes.io/projected/576a687d-6e95-4703-829e-574a84a838dd-kube-api-access-lfmx6\") pod \"glance-db-create-rjrqv\" (UID: \"576a687d-6e95-4703-829e-574a84a838dd\") " pod="openstack/glance-db-create-rjrqv"
Sep 30 10:02:10 crc kubenswrapper[4970]: I0930 10:02:10.299223 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rjrqv"
Sep 30 10:02:10 crc kubenswrapper[4970]: I0930 10:02:10.552289 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0"
Sep 30 10:02:10 crc kubenswrapper[4970]: E0930 10:02:10.552537 4970 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Sep 30 10:02:10 crc kubenswrapper[4970]: E0930 10:02:10.552571 4970 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Sep 30 10:02:10 crc kubenswrapper[4970]: E0930 10:02:10.552652 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift podName:420e577e-2e62-4d35-b9c7-f354dd81add8 nodeName:}" failed. No retries permitted until 2025-09-30 10:02:18.552627837 +0000 UTC m=+951.624478771 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift") pod "swift-storage-0" (UID: "420e577e-2e62-4d35-b9c7-f354dd81add8") : configmap "swift-ring-files" not found
Sep 30 10:02:11 crc kubenswrapper[4970]: I0930 10:02:11.269178 4970 scope.go:117] "RemoveContainer" containerID="0a84e31a0e00d46fcb13cbc71de2f111e59bc283249299112271da3220f4df06"
Sep 30 10:02:11 crc kubenswrapper[4970]: I0930 10:02:11.787577 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-mn485"]
Sep 30 10:02:11 crc kubenswrapper[4970]: W0930 10:02:11.791085 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35cfe6c2_e537_48d0_bb4f_ce375ea3c6f2.slice/crio-4ab30fd7474fd56892b1382f02b2b8304bdbd05be8e69fcff87211f00db75d10 WatchSource:0}: Error finding container 4ab30fd7474fd56892b1382f02b2b8304bdbd05be8e69fcff87211f00db75d10: Status 404 returned error can't find the container with id 4ab30fd7474fd56892b1382f02b2b8304bdbd05be8e69fcff87211f00db75d10
Sep 30 10:02:11 crc kubenswrapper[4970]: I0930 10:02:11.857828 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-rjrqv"]
Sep 30 10:02:11 crc kubenswrapper[4970]: I0930 10:02:11.867724 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-dbz5v"]
Sep 30 10:02:12 crc kubenswrapper[4970]: I0930 10:02:12.487338 4970 generic.go:334] "Generic (PLEG): container finished" podID="576a687d-6e95-4703-829e-574a84a838dd" containerID="b6e72d7381a26467ffd4101687b0c1c934d894b2036581bbe57f243de287752b" exitCode=0
Sep 30 10:02:12 crc kubenswrapper[4970]: I0930 10:02:12.487408 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rjrqv" event={"ID":"576a687d-6e95-4703-829e-574a84a838dd","Type":"ContainerDied","Data":"b6e72d7381a26467ffd4101687b0c1c934d894b2036581bbe57f243de287752b"}
Sep 30 10:02:12 crc kubenswrapper[4970]: I0930 10:02:12.487466 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rjrqv" event={"ID":"576a687d-6e95-4703-829e-574a84a838dd","Type":"ContainerStarted","Data":"3dfc1c28393df731d9e7ce95a9229110ae4e09f366918ab0095e5a44643cc2d7"}
Sep 30 10:02:12 crc kubenswrapper[4970]: I0930 10:02:12.489449 4970 generic.go:334] "Generic (PLEG): container finished" podID="35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2" containerID="50c1dc7707fc97ae3ab745c806d73cca2ec9b6fe0b80b07b3450867cce49f4d6" exitCode=0
Sep 30 10:02:12 crc kubenswrapper[4970]: I0930 10:02:12.489532 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mn485" event={"ID":"35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2","Type":"ContainerDied","Data":"50c1dc7707fc97ae3ab745c806d73cca2ec9b6fe0b80b07b3450867cce49f4d6"}
Sep 30 10:02:12 crc kubenswrapper[4970]: I0930 10:02:12.489560 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mn485" event={"ID":"35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2","Type":"ContainerStarted","Data":"4ab30fd7474fd56892b1382f02b2b8304bdbd05be8e69fcff87211f00db75d10"}
Sep 30 10:02:12 crc kubenswrapper[4970]: I0930 10:02:12.491339 4970 generic.go:334] "Generic (PLEG): container finished" podID="22f61c1e-f425-4745-ab1c-b93977f1152c" containerID="f1eda0329c22c2f97835dff741ea11ecafb21872b36df65f37179b7b9e34794b" exitCode=0
Sep 30 10:02:12 crc kubenswrapper[4970]: I0930 10:02:12.491386 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dbz5v" event={"ID":"22f61c1e-f425-4745-ab1c-b93977f1152c","Type":"ContainerDied","Data":"f1eda0329c22c2f97835dff741ea11ecafb21872b36df65f37179b7b9e34794b"}
Sep 30 10:02:12 crc kubenswrapper[4970]: I0930 10:02:12.491793 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dbz5v" event={"ID":"22f61c1e-f425-4745-ab1c-b93977f1152c","Type":"ContainerStarted","Data":"49fa6539fa9c49410a5d5d81c9eecd270e5c7e30d459d4d6a48e728258debf24"}
Sep 30 10:02:12 crc kubenswrapper[4970]: I0930 10:02:12.493067 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ctbn9" event={"ID":"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5","Type":"ContainerStarted","Data":"426ac4a9baed9b9ed3e7e4ad1ff3449a609b2d746ce76781c4ce95c7a2cac8e9"}
Sep 30 10:02:12 crc kubenswrapper[4970]: I0930 10:02:12.530055 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-ctbn9" podStartSLOduration=2.707307548 podStartE2EDuration="6.530035201s" podCreationTimestamp="2025-09-30 10:02:06 +0000 UTC" firstStartedPulling="2025-09-30 10:02:07.529923131 +0000 UTC m=+940.601774065" lastFinishedPulling="2025-09-30 10:02:11.352650774 +0000 UTC m=+944.424501718" observedRunningTime="2025-09-30 10:02:12.522869065 +0000 UTC m=+945.594720019" watchObservedRunningTime="2025-09-30 10:02:12.530035201 +0000 UTC m=+945.601886135"
Sep 30 10:02:13 crc kubenswrapper[4970]: I0930 10:02:13.908966 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dbz5v"
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.004667 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rjrqv"
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.010132 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mn485"
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.043812 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tbts\" (UniqueName: \"kubernetes.io/projected/22f61c1e-f425-4745-ab1c-b93977f1152c-kube-api-access-2tbts\") pod \"22f61c1e-f425-4745-ab1c-b93977f1152c\" (UID: \"22f61c1e-f425-4745-ab1c-b93977f1152c\") "
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.050784 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22f61c1e-f425-4745-ab1c-b93977f1152c-kube-api-access-2tbts" (OuterVolumeSpecName: "kube-api-access-2tbts") pod "22f61c1e-f425-4745-ab1c-b93977f1152c" (UID: "22f61c1e-f425-4745-ab1c-b93977f1152c"). InnerVolumeSpecName "kube-api-access-2tbts". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.105790 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.145860 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfmx6\" (UniqueName: \"kubernetes.io/projected/576a687d-6e95-4703-829e-574a84a838dd-kube-api-access-lfmx6\") pod \"576a687d-6e95-4703-829e-574a84a838dd\" (UID: \"576a687d-6e95-4703-829e-574a84a838dd\") "
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.145967 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krgs8\" (UniqueName: \"kubernetes.io/projected/35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2-kube-api-access-krgs8\") pod \"35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2\" (UID: \"35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2\") "
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.146444 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tbts\" (UniqueName: \"kubernetes.io/projected/22f61c1e-f425-4745-ab1c-b93977f1152c-kube-api-access-2tbts\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.149813 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/576a687d-6e95-4703-829e-574a84a838dd-kube-api-access-lfmx6" (OuterVolumeSpecName: "kube-api-access-lfmx6") pod "576a687d-6e95-4703-829e-574a84a838dd" (UID: "576a687d-6e95-4703-829e-574a84a838dd"). InnerVolumeSpecName "kube-api-access-lfmx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.150322 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2-kube-api-access-krgs8" (OuterVolumeSpecName: "kube-api-access-krgs8") pod "35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2" (UID: "35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2"). InnerVolumeSpecName "kube-api-access-krgs8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.249147 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfmx6\" (UniqueName: \"kubernetes.io/projected/576a687d-6e95-4703-829e-574a84a838dd-kube-api-access-lfmx6\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.249178 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krgs8\" (UniqueName: \"kubernetes.io/projected/35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2-kube-api-access-krgs8\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.510195 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-mn485" event={"ID":"35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2","Type":"ContainerDied","Data":"4ab30fd7474fd56892b1382f02b2b8304bdbd05be8e69fcff87211f00db75d10"}
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.510248 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ab30fd7474fd56892b1382f02b2b8304bdbd05be8e69fcff87211f00db75d10"
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.510242 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-mn485"
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.511566 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-dbz5v" event={"ID":"22f61c1e-f425-4745-ab1c-b93977f1152c","Type":"ContainerDied","Data":"49fa6539fa9c49410a5d5d81c9eecd270e5c7e30d459d4d6a48e728258debf24"}
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.511598 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49fa6539fa9c49410a5d5d81c9eecd270e5c7e30d459d4d6a48e728258debf24"
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.511622 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-dbz5v"
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.513088 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-rjrqv" event={"ID":"576a687d-6e95-4703-829e-574a84a838dd","Type":"ContainerDied","Data":"3dfc1c28393df731d9e7ce95a9229110ae4e09f366918ab0095e5a44643cc2d7"}
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.513116 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dfc1c28393df731d9e7ce95a9229110ae4e09f366918ab0095e5a44643cc2d7"
Sep 30 10:02:14 crc kubenswrapper[4970]: I0930 10:02:14.513158 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-rjrqv"
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.001241 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66"
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.067252 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fzxhw"]
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.067537 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-fzxhw" podUID="6f7b0129-f9df-4117-bb02-d5798f57aa8e" containerName="dnsmasq-dns" containerID="cri-o://93ee17f7bc5c56770e9db93758388a717b9debb96818c5240c4b90b4a41ac9fd" gracePeriod=10
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.542154 4970 generic.go:334] "Generic (PLEG): container finished" podID="6f7b0129-f9df-4117-bb02-d5798f57aa8e" containerID="93ee17f7bc5c56770e9db93758388a717b9debb96818c5240c4b90b4a41ac9fd" exitCode=0
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.542254 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fzxhw" event={"ID":"6f7b0129-f9df-4117-bb02-d5798f57aa8e","Type":"ContainerDied","Data":"93ee17f7bc5c56770e9db93758388a717b9debb96818c5240c4b90b4a41ac9fd"}
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.542552 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-fzxhw" event={"ID":"6f7b0129-f9df-4117-bb02-d5798f57aa8e","Type":"ContainerDied","Data":"877a11713c8ea49080497918e1519b8db0e79a97897f73876cf2dbcc9ce12d2e"}
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.542566 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="877a11713c8ea49080497918e1519b8db0e79a97897f73876cf2dbcc9ce12d2e"
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.547299 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fzxhw"
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.720444 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-dns-svc\") pod \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") "
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.720781 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp98r\" (UniqueName: \"kubernetes.io/projected/6f7b0129-f9df-4117-bb02-d5798f57aa8e-kube-api-access-cp98r\") pod \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") "
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.720887 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-ovsdbserver-nb\") pod \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") "
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.721983 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-ovsdbserver-sb\") pod \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") "
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.722136 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-config\") pod \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\" (UID: \"6f7b0129-f9df-4117-bb02-d5798f57aa8e\") "
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.728845 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f7b0129-f9df-4117-bb02-d5798f57aa8e-kube-api-access-cp98r" (OuterVolumeSpecName: "kube-api-access-cp98r") pod "6f7b0129-f9df-4117-bb02-d5798f57aa8e" (UID: "6f7b0129-f9df-4117-bb02-d5798f57aa8e"). InnerVolumeSpecName "kube-api-access-cp98r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.758782 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f7b0129-f9df-4117-bb02-d5798f57aa8e" (UID: "6f7b0129-f9df-4117-bb02-d5798f57aa8e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.761296 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-config" (OuterVolumeSpecName: "config") pod "6f7b0129-f9df-4117-bb02-d5798f57aa8e" (UID: "6f7b0129-f9df-4117-bb02-d5798f57aa8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.777596 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f7b0129-f9df-4117-bb02-d5798f57aa8e" (UID: "6f7b0129-f9df-4117-bb02-d5798f57aa8e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.779667 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f7b0129-f9df-4117-bb02-d5798f57aa8e" (UID: "6f7b0129-f9df-4117-bb02-d5798f57aa8e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.824508 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.824544 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.824555 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-config\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.824564 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f7b0129-f9df-4117-bb02-d5798f57aa8e-dns-svc\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:17 crc kubenswrapper[4970]: I0930 10:02:17.824574 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp98r\" (UniqueName: \"kubernetes.io/projected/6f7b0129-f9df-4117-bb02-d5798f57aa8e-kube-api-access-cp98r\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:18 crc kubenswrapper[4970]: I0930 10:02:18.557092 4970 generic.go:334] "Generic (PLEG): container finished" podID="9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5" containerID="426ac4a9baed9b9ed3e7e4ad1ff3449a609b2d746ce76781c4ce95c7a2cac8e9" exitCode=0
Sep 30 10:02:18 crc kubenswrapper[4970]: I0930 10:02:18.557147 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ctbn9" event={"ID":"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5","Type":"ContainerDied","Data":"426ac4a9baed9b9ed3e7e4ad1ff3449a609b2d746ce76781c4ce95c7a2cac8e9"}
Sep 30 10:02:18 crc kubenswrapper[4970]: I0930 10:02:18.557225 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-fzxhw"
Sep 30 10:02:18 crc kubenswrapper[4970]: I0930 10:02:18.615561 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fzxhw"]
Sep 30 10:02:18 crc kubenswrapper[4970]: I0930 10:02:18.621483 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-fzxhw"]
Sep 30 10:02:18 crc kubenswrapper[4970]: I0930 10:02:18.642498 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0"
Sep 30 10:02:18 crc kubenswrapper[4970]: I0930 10:02:18.650384 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/420e577e-2e62-4d35-b9c7-f354dd81add8-etc-swift\") pod \"swift-storage-0\" (UID: \"420e577e-2e62-4d35-b9c7-f354dd81add8\") " pod="openstack/swift-storage-0"
Sep 30 10:02:18 crc kubenswrapper[4970]: I0930 10:02:18.707957 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.297789 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Sep 30 10:02:19 crc kubenswrapper[4970]: W0930 10:02:19.302765 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod420e577e_2e62_4d35_b9c7_f354dd81add8.slice/crio-9624eb9ea6815fd3aa33332732cdb00dd334934e3cc6d3d78f67401c7b65c43c WatchSource:0}: Error finding container 9624eb9ea6815fd3aa33332732cdb00dd334934e3cc6d3d78f67401c7b65c43c: Status 404 returned error can't find the container with id 9624eb9ea6815fd3aa33332732cdb00dd334934e3cc6d3d78f67401c7b65c43c
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.568902 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"9624eb9ea6815fd3aa33332732cdb00dd334934e3cc6d3d78f67401c7b65c43c"}
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.691769 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f7b0129-f9df-4117-bb02-d5798f57aa8e" path="/var/lib/kubelet/pods/6f7b0129-f9df-4117-bb02-d5798f57aa8e/volumes"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.808504 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-00fc-account-create-bwbx9"]
Sep 30 10:02:19 crc kubenswrapper[4970]: E0930 10:02:19.809028 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f7b0129-f9df-4117-bb02-d5798f57aa8e" containerName="init"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.809047 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f7b0129-f9df-4117-bb02-d5798f57aa8e" containerName="init"
Sep 30 10:02:19 crc kubenswrapper[4970]: E0930 10:02:19.809071 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2" containerName="mariadb-database-create"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.809081 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2" containerName="mariadb-database-create"
Sep 30 10:02:19 crc kubenswrapper[4970]: E0930 10:02:19.809139 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f7b0129-f9df-4117-bb02-d5798f57aa8e" containerName="dnsmasq-dns"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.809151 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f7b0129-f9df-4117-bb02-d5798f57aa8e" containerName="dnsmasq-dns"
Sep 30 10:02:19 crc kubenswrapper[4970]: E0930 10:02:19.809166 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="576a687d-6e95-4703-829e-574a84a838dd" containerName="mariadb-database-create"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.809173 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="576a687d-6e95-4703-829e-574a84a838dd" containerName="mariadb-database-create"
Sep 30 10:02:19 crc kubenswrapper[4970]: E0930 10:02:19.809189 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22f61c1e-f425-4745-ab1c-b93977f1152c" containerName="mariadb-database-create"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.809200 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="22f61c1e-f425-4745-ab1c-b93977f1152c" containerName="mariadb-database-create"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.809456 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="576a687d-6e95-4703-829e-574a84a838dd" containerName="mariadb-database-create"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.809485 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f7b0129-f9df-4117-bb02-d5798f57aa8e" containerName="dnsmasq-dns"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.809505 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2" containerName="mariadb-database-create"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.809519 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="22f61c1e-f425-4745-ab1c-b93977f1152c" containerName="mariadb-database-create"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.810660 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-00fc-account-create-bwbx9"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.813017 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.818771 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-00fc-account-create-bwbx9"]
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.973242 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnqhr\" (UniqueName: \"kubernetes.io/projected/5387d0e5-d9fe-4032-95ae-c1d205e27e69-kube-api-access-xnqhr\") pod \"placement-00fc-account-create-bwbx9\" (UID: \"5387d0e5-d9fe-4032-95ae-c1d205e27e69\") " pod="openstack/placement-00fc-account-create-bwbx9"
Sep 30 10:02:19 crc kubenswrapper[4970]: I0930 10:02:19.973891 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.078442 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-ring-data-devices\") pod \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") "
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.082058 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwlc2\" (UniqueName: \"kubernetes.io/projected/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-kube-api-access-hwlc2\") pod \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") "
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.079490 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5" (UID: "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.082392 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-scripts\") pod \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") "
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.082436 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-dispersionconf\") pod \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") "
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.082781 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-combined-ca-bundle\") pod \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") "
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.082834 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-swiftconf\") pod \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") "
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.082871 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-etc-swift\") pod \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\" (UID: \"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5\") "
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.084259 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnqhr\" (UniqueName: \"kubernetes.io/projected/5387d0e5-d9fe-4032-95ae-c1d205e27e69-kube-api-access-xnqhr\") pod \"placement-00fc-account-create-bwbx9\" (UID: \"5387d0e5-d9fe-4032-95ae-c1d205e27e69\") " pod="openstack/placement-00fc-account-create-bwbx9"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.084350 4970 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-ring-data-devices\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.086174 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5" (UID: "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.108281 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-42b0-account-create-7fpbl"]
Sep 30 10:02:20 crc kubenswrapper[4970]: E0930 10:02:20.108926 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5" containerName="swift-ring-rebalance"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.108943 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5" containerName="swift-ring-rebalance"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.109155 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5" containerName="swift-ring-rebalance"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.109913 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-42b0-account-create-7fpbl"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.112809 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.113256 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-kube-api-access-hwlc2" (OuterVolumeSpecName: "kube-api-access-hwlc2") pod "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5" (UID: "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5"). InnerVolumeSpecName "kube-api-access-hwlc2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.116632 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-scripts" (OuterVolumeSpecName: "scripts") pod "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5" (UID: "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.120161 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5" (UID: "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.120968 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnqhr\" (UniqueName: \"kubernetes.io/projected/5387d0e5-d9fe-4032-95ae-c1d205e27e69-kube-api-access-xnqhr\") pod \"placement-00fc-account-create-bwbx9\" (UID: \"5387d0e5-d9fe-4032-95ae-c1d205e27e69\") " pod="openstack/placement-00fc-account-create-bwbx9"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.124663 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-42b0-account-create-7fpbl"]
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.126658 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5" (UID: "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.135909 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5" (UID: "9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.136155 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-00fc-account-create-bwbx9"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.186188 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwlc2\" (UniqueName: \"kubernetes.io/projected/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-kube-api-access-hwlc2\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.186214 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-scripts\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.186224 4970 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-dispersionconf\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.186238 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.186249 4970 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-swiftconf\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.186261 4970 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5-etc-swift\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.288364 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj47s\" (UniqueName: \"kubernetes.io/projected/2ec35f86-ee7a-432a-a69f-3235fa387bf6-kube-api-access-pj47s\") pod \"glance-42b0-account-create-7fpbl\" (UID: \"2ec35f86-ee7a-432a-a69f-3235fa387bf6\") " pod="openstack/glance-42b0-account-create-7fpbl"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.390106 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj47s\" (UniqueName: \"kubernetes.io/projected/2ec35f86-ee7a-432a-a69f-3235fa387bf6-kube-api-access-pj47s\") pod \"glance-42b0-account-create-7fpbl\" (UID: \"2ec35f86-ee7a-432a-a69f-3235fa387bf6\") " pod="openstack/glance-42b0-account-create-7fpbl"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.417928 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj47s\" (UniqueName: \"kubernetes.io/projected/2ec35f86-ee7a-432a-a69f-3235fa387bf6-kube-api-access-pj47s\") pod \"glance-42b0-account-create-7fpbl\" (UID: \"2ec35f86-ee7a-432a-a69f-3235fa387bf6\") " pod="openstack/glance-42b0-account-create-7fpbl"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.495939 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-42b0-account-create-7fpbl"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.578009 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ctbn9" event={"ID":"9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5","Type":"ContainerDied","Data":"575fc1112c42f67da1cddd6215c376fa6a896ce93422258f1dd8d12d489ea576"}
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.578055 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="575fc1112c42f67da1cddd6215c376fa6a896ce93422258f1dd8d12d489ea576"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.578124 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ctbn9"
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.787147 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-00fc-account-create-bwbx9"]
Sep 30 10:02:20 crc kubenswrapper[4970]: W0930 10:02:20.798094 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5387d0e5_d9fe_4032_95ae_c1d205e27e69.slice/crio-dee1f76e993bd11e63e28c5c310a00d0c8e28205cbb7a7968b6d232108228620 WatchSource:0}: Error finding container dee1f76e993bd11e63e28c5c310a00d0c8e28205cbb7a7968b6d232108228620: Status 404 returned error can't find the container with id dee1f76e993bd11e63e28c5c310a00d0c8e28205cbb7a7968b6d232108228620
Sep 30 10:02:20 crc kubenswrapper[4970]: I0930 10:02:20.983664 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-42b0-account-create-7fpbl"]
Sep 30 10:02:20 crc kubenswrapper[4970]: W0930 10:02:20.997370 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ec35f86_ee7a_432a_a69f_3235fa387bf6.slice/crio-7fb1e6bfb345a7625b73a6479b82d7539b8a97abbc3efe3c12a0793fd1895cc0 WatchSource:0}: Error finding container 7fb1e6bfb345a7625b73a6479b82d7539b8a97abbc3efe3c12a0793fd1895cc0: Status 404 returned error can't find the container with id 7fb1e6bfb345a7625b73a6479b82d7539b8a97abbc3efe3c12a0793fd1895cc0
Sep 30 10:02:21 crc kubenswrapper[4970]: I0930 10:02:21.594194 4970 generic.go:334] "Generic (PLEG): container finished" podID="5387d0e5-d9fe-4032-95ae-c1d205e27e69" containerID="d7432d097ba59dae17e86b9d0fbed9165674ecca90ef48a794c302207a8022c1" exitCode=0
Sep 30 10:02:21 crc kubenswrapper[4970]: I0930 10:02:21.594552 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00fc-account-create-bwbx9" event={"ID":"5387d0e5-d9fe-4032-95ae-c1d205e27e69","Type":"ContainerDied","Data":"d7432d097ba59dae17e86b9d0fbed9165674ecca90ef48a794c302207a8022c1"}
Sep 30 10:02:21 crc kubenswrapper[4970]: I0930 10:02:21.594587 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00fc-account-create-bwbx9" event={"ID":"5387d0e5-d9fe-4032-95ae-c1d205e27e69","Type":"ContainerStarted","Data":"dee1f76e993bd11e63e28c5c310a00d0c8e28205cbb7a7968b6d232108228620"}
Sep 30 10:02:21 crc kubenswrapper[4970]: I0930 10:02:21.617527 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"46ead34461d746dbe4e12a726e3b677c7a61fcda43078746e46dafb85590834c"}
Sep 30 10:02:21 crc kubenswrapper[4970]: I0930 10:02:21.617585 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"62045ca6733e8cf1a543ae05f0210ae880c30b8b23f2b822f8c1c23aadb53a6d"}
Sep 30 10:02:21 crc kubenswrapper[4970]: I0930 10:02:21.617597 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"ebdd844b3f611474a5fb46eea2efd37d5b4309dae96e08109c7f11b4dffd470d"}
Sep 30 10:02:21 crc kubenswrapper[4970]: I0930 10:02:21.617604 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"eb8ae64186933fa10563339d5d9cc2c6145672f9b30246f577714e2aad5c5ff1"}
Sep 30 10:02:21 crc kubenswrapper[4970]: I0930 10:02:21.621337 4970 generic.go:334] "Generic (PLEG): container finished" podID="2ec35f86-ee7a-432a-a69f-3235fa387bf6" containerID="aa433e908051c68065de335091661717af966d385d6ae0ca6c967e0a4e2f32ac" exitCode=0
Sep 30 10:02:21 crc kubenswrapper[4970]: I0930 10:02:21.621378 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-42b0-account-create-7fpbl" event={"ID":"2ec35f86-ee7a-432a-a69f-3235fa387bf6","Type":"ContainerDied","Data":"aa433e908051c68065de335091661717af966d385d6ae0ca6c967e0a4e2f32ac"}
Sep 30 10:02:21 crc kubenswrapper[4970]: I0930 10:02:21.621400 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-42b0-account-create-7fpbl" event={"ID":"2ec35f86-ee7a-432a-a69f-3235fa387bf6","Type":"ContainerStarted","Data":"7fb1e6bfb345a7625b73a6479b82d7539b8a97abbc3efe3c12a0793fd1895cc0"}
Sep 30 10:02:22 crc kubenswrapper[4970]: I0930 10:02:22.631989 4970 generic.go:334] "Generic (PLEG): container finished" podID="e0b2bd53-a874-4d92-b137-24f772f5d61c" containerID="0fa72e8e7de3abae746e76bd2c494053bb66770c7d210235abd107e7a49c3403" exitCode=0
Sep 30 10:02:22 crc kubenswrapper[4970]: I0930 10:02:22.632033 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0b2bd53-a874-4d92-b137-24f772f5d61c","Type":"ContainerDied","Data":"0fa72e8e7de3abae746e76bd2c494053bb66770c7d210235abd107e7a49c3403"}
Sep 30 10:02:22 crc kubenswrapper[4970]: I0930 10:02:22.635416 4970 generic.go:334] "Generic (PLEG): container finished" podID="7bc5f72b-8b51-4a55-971a-83135118e627" containerID="bd3bcbead8ffdfabbe6793eb7505c5bbcae238ae1df09b02911c00024d615194" exitCode=0
Sep 30 10:02:22 crc kubenswrapper[4970]: I0930 10:02:22.635488 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7bc5f72b-8b51-4a55-971a-83135118e627","Type":"ContainerDied","Data":"bd3bcbead8ffdfabbe6793eb7505c5bbcae238ae1df09b02911c00024d615194"}
Sep 30 10:02:22 crc kubenswrapper[4970]: I0930 10:02:22.638588 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"1dc7eb1b7f48cde41b33dace56ebe8d11bff06b9754f0b498f8925703c444305"}
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.005875 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-42b0-account-create-7fpbl"
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.118391 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-00fc-account-create-bwbx9"
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.158848 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj47s\" (UniqueName: \"kubernetes.io/projected/2ec35f86-ee7a-432a-a69f-3235fa387bf6-kube-api-access-pj47s\") pod \"2ec35f86-ee7a-432a-a69f-3235fa387bf6\" (UID: \"2ec35f86-ee7a-432a-a69f-3235fa387bf6\") "
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.164664 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ec35f86-ee7a-432a-a69f-3235fa387bf6-kube-api-access-pj47s" (OuterVolumeSpecName: "kube-api-access-pj47s") pod "2ec35f86-ee7a-432a-a69f-3235fa387bf6" (UID: "2ec35f86-ee7a-432a-a69f-3235fa387bf6"). InnerVolumeSpecName "kube-api-access-pj47s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.261958 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnqhr\" (UniqueName: \"kubernetes.io/projected/5387d0e5-d9fe-4032-95ae-c1d205e27e69-kube-api-access-xnqhr\") pod \"5387d0e5-d9fe-4032-95ae-c1d205e27e69\" (UID: \"5387d0e5-d9fe-4032-95ae-c1d205e27e69\") "
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.262834 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj47s\" (UniqueName: \"kubernetes.io/projected/2ec35f86-ee7a-432a-a69f-3235fa387bf6-kube-api-access-pj47s\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.281269 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5387d0e5-d9fe-4032-95ae-c1d205e27e69-kube-api-access-xnqhr" (OuterVolumeSpecName: "kube-api-access-xnqhr") pod "5387d0e5-d9fe-4032-95ae-c1d205e27e69" (UID: "5387d0e5-d9fe-4032-95ae-c1d205e27e69"). InnerVolumeSpecName "kube-api-access-xnqhr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.364715 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnqhr\" (UniqueName: \"kubernetes.io/projected/5387d0e5-d9fe-4032-95ae-c1d205e27e69-kube-api-access-xnqhr\") on node \"crc\" DevicePath \"\""
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.650766 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7bc5f72b-8b51-4a55-971a-83135118e627","Type":"ContainerStarted","Data":"212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95"}
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.651116 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.652734 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-00fc-account-create-bwbx9" event={"ID":"5387d0e5-d9fe-4032-95ae-c1d205e27e69","Type":"ContainerDied","Data":"dee1f76e993bd11e63e28c5c310a00d0c8e28205cbb7a7968b6d232108228620"}
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.652757 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dee1f76e993bd11e63e28c5c310a00d0c8e28205cbb7a7968b6d232108228620"
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.653143 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-00fc-account-create-bwbx9"
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.656817 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"e41c3785bdbeea169cd265fc8d68923fe2e472bde5b46b0e5e7febc1ee6757cb"}
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.656850 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"17df03fad6ab993c337f5a7e2e7ab35ddd984cec07b59806988657f0bdb8a57c"}
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.656860 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"2b4f6c3d4a22b86f9aed00bff0b089cbc87996e5b1cec17bb76397e18aec419d"}
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.658125 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-42b0-account-create-7fpbl"
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.658133 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-42b0-account-create-7fpbl" event={"ID":"2ec35f86-ee7a-432a-a69f-3235fa387bf6","Type":"ContainerDied","Data":"7fb1e6bfb345a7625b73a6479b82d7539b8a97abbc3efe3c12a0793fd1895cc0"}
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.658174 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fb1e6bfb345a7625b73a6479b82d7539b8a97abbc3efe3c12a0793fd1895cc0"
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.662327 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0b2bd53-a874-4d92-b137-24f772f5d61c","Type":"ContainerStarted","Data":"e607da78e75bc80d704d1f6cde7909bdbc75e630bb1b648b81a03c52611b95a5"}
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.662594 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:02:23 crc kubenswrapper[4970]: I0930 10:02:23.690678 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=51.506565405 podStartE2EDuration="58.690649925s" podCreationTimestamp="2025-09-30 10:01:25 +0000 UTC" firstStartedPulling="2025-09-30 10:01:44.468354719 +0000 UTC m=+917.540205653" lastFinishedPulling="2025-09-30 10:01:51.652439239 +0000 UTC m=+924.724290173" observedRunningTime="2025-09-30 10:02:23.68388884 +0000 UTC m=+956.755739794" watchObservedRunningTime="2025-09-30 10:02:23.690649925 +0000 UTC m=+956.762500879"
Sep 30 10:02:24 crc kubenswrapper[4970]: I0930 10:02:24.046449 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=51.835480779 podStartE2EDuration="59.046424844s" podCreationTimestamp="2025-09-30 10:01:25 +0000 UTC" firstStartedPulling="2025-09-30 10:01:44.476328848 +0000 UTC m=+917.548179782" lastFinishedPulling="2025-09-30 10:01:51.687272903 +0000 UTC m=+924.759123847" observedRunningTime="2025-09-30 10:02:23.721407397 +0000 UTC m=+956.793258341" watchObservedRunningTime="2025-09-30 10:02:24.046424844 +0000 UTC m=+957.118275778"
Sep 30 10:02:24 crc kubenswrapper[4970]: I0930 10:02:24.677733 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"9cf18305c54dd42385d48c175823c762a26d377ead2d219785f01a4d3e6548ce"}
Sep 30 10:02:24 crc kubenswrapper[4970]: I0930 10:02:24.678316 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"650246253ee22f78adb01b5b7c5eaffc0661153e369a5b91be3f39155cd3fe4b"}
Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.018596 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vtdnt" podUID="ea8f06d0-75e0-4ed8-9e37-086886b019e5" containerName="ovn-controller" probeResult="failure" output=<
Sep 30 10:02:25 crc kubenswrapper[4970]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Sep 30 10:02:25 crc kubenswrapper[4970]: >
Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.086090 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-h8572"
Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.087026 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-h8572"
Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.266605 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-s8gsl"]
Sep 30 10:02:25 crc kubenswrapper[4970]: E0930 10:02:25.267902 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5387d0e5-d9fe-4032-95ae-c1d205e27e69" containerName="mariadb-account-create"
Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.268009 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="5387d0e5-d9fe-4032-95ae-c1d205e27e69" containerName="mariadb-account-create"
Sep 30 10:02:25 crc kubenswrapper[4970]: E0930 10:02:25.268121 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ec35f86-ee7a-432a-a69f-3235fa387bf6" containerName="mariadb-account-create"
Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.268203 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ec35f86-ee7a-432a-a69f-3235fa387bf6" containerName="mariadb-account-create"
Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.268491 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ec35f86-ee7a-432a-a69f-3235fa387bf6" containerName="mariadb-account-create"
Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.268631 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="5387d0e5-d9fe-4032-95ae-c1d205e27e69" containerName="mariadb-account-create"
Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.270711 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.273308 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nxvpd" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.273846 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.289933 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-s8gsl"] Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.349347 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vtdnt-config-zpcgz"] Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.350549 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.353167 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.372966 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vtdnt-config-zpcgz"] Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.402826 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-log-ovn\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.403236 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-combined-ca-bundle\") pod \"glance-db-sync-s8gsl\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.403331 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgwf7\" (UniqueName: \"kubernetes.io/projected/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-kube-api-access-hgwf7\") pod \"glance-db-sync-s8gsl\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.403427 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36a997b6-faa7-4e1d-b5db-ab407fe84792-scripts\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.403513 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-run-ovn\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.403601 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-run\") pod 
\"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.403683 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9tdg\" (UniqueName: \"kubernetes.io/projected/36a997b6-faa7-4e1d-b5db-ab407fe84792-kube-api-access-k9tdg\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.403769 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-config-data\") pod \"glance-db-sync-s8gsl\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.403861 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-db-sync-config-data\") pod \"glance-db-sync-s8gsl\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.403996 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36a997b6-faa7-4e1d-b5db-ab407fe84792-additional-scripts\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.506294 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36a997b6-faa7-4e1d-b5db-ab407fe84792-additional-scripts\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.506954 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-log-ovn\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.507032 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36a997b6-faa7-4e1d-b5db-ab407fe84792-additional-scripts\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.506636 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-log-ovn\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.507262 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-combined-ca-bundle\") pod \"glance-db-sync-s8gsl\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.508366 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgwf7\" (UniqueName: \"kubernetes.io/projected/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-kube-api-access-hgwf7\") pod \"glance-db-sync-s8gsl\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.508506 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36a997b6-faa7-4e1d-b5db-ab407fe84792-scripts\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.508665 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-run-ovn\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.508782 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-run\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.508894 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9tdg\" (UniqueName: \"kubernetes.io/projected/36a997b6-faa7-4e1d-b5db-ab407fe84792-kube-api-access-k9tdg\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.508977 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-config-data\") pod \"glance-db-sync-s8gsl\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.509122 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-db-sync-config-data\") pod \"glance-db-sync-s8gsl\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.509611 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-run\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.509708 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-run-ovn\") pod 
\"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.511061 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36a997b6-faa7-4e1d-b5db-ab407fe84792-scripts\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.513731 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-combined-ca-bundle\") pod \"glance-db-sync-s8gsl\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.514060 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-db-sync-config-data\") pod \"glance-db-sync-s8gsl\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.514086 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-config-data\") pod \"glance-db-sync-s8gsl\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.526607 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9tdg\" (UniqueName: \"kubernetes.io/projected/36a997b6-faa7-4e1d-b5db-ab407fe84792-kube-api-access-k9tdg\") pod \"ovn-controller-vtdnt-config-zpcgz\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.547548 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgwf7\" (UniqueName: \"kubernetes.io/projected/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-kube-api-access-hgwf7\") pod \"glance-db-sync-s8gsl\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.598866 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.672775 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.730279 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"927dff1693732fa944a45b0e027c2a8988357fe43a06d3f1e61e547c90f3db41"} Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.730574 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"f6c74709c943b5da95c62539fb61abce7d5b18ee1651fe2f247ac2131105f9e4"} Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.730585 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"7da30bf9b652f7674886ebb22785957a51c68094eb1f1cc49782c0452b03e236"} Sep 30 10:02:25 crc kubenswrapper[4970]: I0930 10:02:25.730595 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"42cfc323b5f62053acb8ccbae2c3b2473e3e1c984e91291cd0089752d8fc937c"} Sep 30 10:02:26 crc kubenswrapper[4970]: I0930 10:02:26.225174 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-s8gsl"] Sep 30 10:02:26 crc kubenswrapper[4970]: W0930 10:02:26.230033 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda77c6408_e115_4a90_bba3_2d5c64f2c8a3.slice/crio-551ada5bb7dd39cb4a8b2b55237336742c5438775d953c7b6bc8279ef38bd587 WatchSource:0}: Error finding container 551ada5bb7dd39cb4a8b2b55237336742c5438775d953c7b6bc8279ef38bd587: Status 404 returned error can't find the container with id 551ada5bb7dd39cb4a8b2b55237336742c5438775d953c7b6bc8279ef38bd587 Sep 30 10:02:26 crc kubenswrapper[4970]: W0930 10:02:26.280912 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36a997b6_faa7_4e1d_b5db_ab407fe84792.slice/crio-7c83da536b40f2fd662bf9dce19ade9fd8f9e23cab18e9ff2e177f8059112c23 WatchSource:0}: Error finding container 7c83da536b40f2fd662bf9dce19ade9fd8f9e23cab18e9ff2e177f8059112c23: Status 404 returned error can't find the container with id 7c83da536b40f2fd662bf9dce19ade9fd8f9e23cab18e9ff2e177f8059112c23 Sep 30 10:02:26 crc kubenswrapper[4970]: I0930 10:02:26.287117 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vtdnt-config-zpcgz"] Sep 30 10:02:26 crc kubenswrapper[4970]: I0930 10:02:26.737765 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-s8gsl" event={"ID":"a77c6408-e115-4a90-bba3-2d5c64f2c8a3","Type":"ContainerStarted","Data":"551ada5bb7dd39cb4a8b2b55237336742c5438775d953c7b6bc8279ef38bd587"} Sep 30 10:02:26 crc kubenswrapper[4970]: I0930 10:02:26.745449 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"420e577e-2e62-4d35-b9c7-f354dd81add8","Type":"ContainerStarted","Data":"4c68a92e85ea285da380c336166b7de6008761ada4f71a39bca4332f6a9e4972"} Sep 30 10:02:26 crc kubenswrapper[4970]: I0930 10:02:26.747480 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vtdnt-config-zpcgz" 
event={"ID":"36a997b6-faa7-4e1d-b5db-ab407fe84792","Type":"ContainerStarted","Data":"fd9724582e291160c5c6e6de21954648ffcb91e3259574943b5aedcba22f4786"} Sep 30 10:02:26 crc kubenswrapper[4970]: I0930 10:02:26.747542 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vtdnt-config-zpcgz" event={"ID":"36a997b6-faa7-4e1d-b5db-ab407fe84792","Type":"ContainerStarted","Data":"7c83da536b40f2fd662bf9dce19ade9fd8f9e23cab18e9ff2e177f8059112c23"} Sep 30 10:02:26 crc kubenswrapper[4970]: I0930 10:02:26.782033 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.881497545 podStartE2EDuration="25.78198837s" podCreationTimestamp="2025-09-30 10:02:01 +0000 UTC" firstStartedPulling="2025-09-30 10:02:19.305768204 +0000 UTC m=+952.377619148" lastFinishedPulling="2025-09-30 10:02:24.206259039 +0000 UTC m=+957.278109973" observedRunningTime="2025-09-30 10:02:26.778850774 +0000 UTC m=+959.850701708" watchObservedRunningTime="2025-09-30 10:02:26.78198837 +0000 UTC m=+959.853839314" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.157143 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-x6dpl"] Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.158699 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.176541 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-x6dpl"] Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.189350 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.248244 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-config\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.248327 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxtgz\" (UniqueName: \"kubernetes.io/projected/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-kube-api-access-rxtgz\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.248356 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.248394 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.248419 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.248441 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.350069 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxtgz\" (UniqueName: \"kubernetes.io/projected/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-kube-api-access-rxtgz\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.350130 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.350179 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.350205 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.350228 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.350292 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-config\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.351217 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-config\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.352155 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.352687 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.353229 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.353669 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.383694 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxtgz\" (UniqueName: \"kubernetes.io/projected/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-kube-api-access-rxtgz\") pod \"dnsmasq-dns-6d5b6d6b67-x6dpl\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.486151 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.758941 4970 generic.go:334] "Generic (PLEG): container finished" podID="36a997b6-faa7-4e1d-b5db-ab407fe84792" containerID="fd9724582e291160c5c6e6de21954648ffcb91e3259574943b5aedcba22f4786" exitCode=0 Sep 30 10:02:27 crc kubenswrapper[4970]: I0930 10:02:27.759013 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vtdnt-config-zpcgz" event={"ID":"36a997b6-faa7-4e1d-b5db-ab407fe84792","Type":"ContainerDied","Data":"fd9724582e291160c5c6e6de21954648ffcb91e3259574943b5aedcba22f4786"} Sep 30 10:02:28 crc kubenswrapper[4970]: I0930 10:02:28.023456 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-x6dpl"] Sep 30 10:02:28 crc kubenswrapper[4970]: I0930 10:02:28.767934 4970 generic.go:334] "Generic (PLEG): container finished" podID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" containerID="3c739f81b3d82acc27285f57dd7ebe38925896c69ca9b1c487249e728e58fd46" exitCode=0 Sep 30 10:02:28 crc kubenswrapper[4970]: I0930 10:02:28.768024 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" event={"ID":"bf98d493-f44f-4cd9-ab10-4a5a132ce94f","Type":"ContainerDied","Data":"3c739f81b3d82acc27285f57dd7ebe38925896c69ca9b1c487249e728e58fd46"} Sep 30 10:02:28 crc kubenswrapper[4970]: I0930 10:02:28.768291 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" event={"ID":"bf98d493-f44f-4cd9-ab10-4a5a132ce94f","Type":"ContainerStarted","Data":"e52081279786d28c2c2ec621e222f27ac2e7cf218d315edee5cb2750d6a78087"} Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.106610 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.204883 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-run-ovn\") pod \"36a997b6-faa7-4e1d-b5db-ab407fe84792\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.205031 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "36a997b6-faa7-4e1d-b5db-ab407fe84792" (UID: "36a997b6-faa7-4e1d-b5db-ab407fe84792"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.205191 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36a997b6-faa7-4e1d-b5db-ab407fe84792-scripts\") pod \"36a997b6-faa7-4e1d-b5db-ab407fe84792\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.205256 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36a997b6-faa7-4e1d-b5db-ab407fe84792-additional-scripts\") pod \"36a997b6-faa7-4e1d-b5db-ab407fe84792\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.205325 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-log-ovn\") pod \"36a997b6-faa7-4e1d-b5db-ab407fe84792\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.205352 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-run\") pod \"36a997b6-faa7-4e1d-b5db-ab407fe84792\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.205466 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "36a997b6-faa7-4e1d-b5db-ab407fe84792" (UID: "36a997b6-faa7-4e1d-b5db-ab407fe84792"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.205487 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9tdg\" (UniqueName: \"kubernetes.io/projected/36a997b6-faa7-4e1d-b5db-ab407fe84792-kube-api-access-k9tdg\") pod \"36a997b6-faa7-4e1d-b5db-ab407fe84792\" (UID: \"36a997b6-faa7-4e1d-b5db-ab407fe84792\") " Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.205566 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-run" (OuterVolumeSpecName: "var-run") pod "36a997b6-faa7-4e1d-b5db-ab407fe84792" (UID: "36a997b6-faa7-4e1d-b5db-ab407fe84792"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.206876 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36a997b6-faa7-4e1d-b5db-ab407fe84792-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "36a997b6-faa7-4e1d-b5db-ab407fe84792" (UID: "36a997b6-faa7-4e1d-b5db-ab407fe84792"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.207482 4970 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-run-ovn\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.207516 4970 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/36a997b6-faa7-4e1d-b5db-ab407fe84792-additional-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.207533 4970 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-log-ovn\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.207549 4970 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36a997b6-faa7-4e1d-b5db-ab407fe84792-var-run\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.209457 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36a997b6-faa7-4e1d-b5db-ab407fe84792-scripts" (OuterVolumeSpecName: "scripts") pod "36a997b6-faa7-4e1d-b5db-ab407fe84792" (UID: "36a997b6-faa7-4e1d-b5db-ab407fe84792"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.209688 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36a997b6-faa7-4e1d-b5db-ab407fe84792-kube-api-access-k9tdg" (OuterVolumeSpecName: "kube-api-access-k9tdg") pod "36a997b6-faa7-4e1d-b5db-ab407fe84792" (UID: "36a997b6-faa7-4e1d-b5db-ab407fe84792"). InnerVolumeSpecName "kube-api-access-k9tdg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.311708 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36a997b6-faa7-4e1d-b5db-ab407fe84792-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.311805 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9tdg\" (UniqueName: \"kubernetes.io/projected/36a997b6-faa7-4e1d-b5db-ab407fe84792-kube-api-access-k9tdg\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.466765 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e136-account-create-p4txc"] Sep 30 10:02:29 crc kubenswrapper[4970]: E0930 10:02:29.467205 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36a997b6-faa7-4e1d-b5db-ab407fe84792" containerName="ovn-config" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.467224 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="36a997b6-faa7-4e1d-b5db-ab407fe84792" containerName="ovn-config" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.467406 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="36a997b6-faa7-4e1d-b5db-ab407fe84792" containerName="ovn-config" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.468149 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e136-account-create-p4txc" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.471606 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.481671 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e136-account-create-p4txc"] Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.516252 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cszq\" (UniqueName: \"kubernetes.io/projected/fe24c936-58db-48a2-8cf5-ad1ee7473ef8-kube-api-access-5cszq\") pod \"keystone-e136-account-create-p4txc\" (UID: \"fe24c936-58db-48a2-8cf5-ad1ee7473ef8\") " pod="openstack/keystone-e136-account-create-p4txc" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.618231 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cszq\" (UniqueName: \"kubernetes.io/projected/fe24c936-58db-48a2-8cf5-ad1ee7473ef8-kube-api-access-5cszq\") pod \"keystone-e136-account-create-p4txc\" (UID: \"fe24c936-58db-48a2-8cf5-ad1ee7473ef8\") " pod="openstack/keystone-e136-account-create-p4txc" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.636603 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cszq\" (UniqueName: \"kubernetes.io/projected/fe24c936-58db-48a2-8cf5-ad1ee7473ef8-kube-api-access-5cszq\") pod \"keystone-e136-account-create-p4txc\" (UID: \"fe24c936-58db-48a2-8cf5-ad1ee7473ef8\") " pod="openstack/keystone-e136-account-create-p4txc" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.780391 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vtdnt-config-zpcgz" event={"ID":"36a997b6-faa7-4e1d-b5db-ab407fe84792","Type":"ContainerDied","Data":"7c83da536b40f2fd662bf9dce19ade9fd8f9e23cab18e9ff2e177f8059112c23"} Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.780442 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vtdnt-config-zpcgz" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.780446 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c83da536b40f2fd662bf9dce19ade9fd8f9e23cab18e9ff2e177f8059112c23" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.783689 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" event={"ID":"bf98d493-f44f-4cd9-ab10-4a5a132ce94f","Type":"ContainerStarted","Data":"1e25e16d357f4992d1d3716ee20aeb8544e0462676b347425ccf853363c5e9d7"} Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.783918 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.789573 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e136-account-create-p4txc" Sep 30 10:02:29 crc kubenswrapper[4970]: I0930 10:02:29.809656 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" podStartSLOduration=2.809634401 podStartE2EDuration="2.809634401s" podCreationTimestamp="2025-09-30 10:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:02:29.805113928 +0000 UTC m=+962.876964882" watchObservedRunningTime="2025-09-30 10:02:29.809634401 +0000 UTC m=+962.881485335" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.035863 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-vtdnt" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.220264 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vtdnt-config-zpcgz"] Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.227871 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vtdnt-config-zpcgz"] Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.285975 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e136-account-create-p4txc"] Sep 30 10:02:30 crc kubenswrapper[4970]: W0930 10:02:30.294406 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe24c936_58db_48a2_8cf5_ad1ee7473ef8.slice/crio-ae0aa20f9d23b0213eb6db6238eb58f5d95ab14e95aeb4add1a1296d93a7671c WatchSource:0}: Error finding container ae0aa20f9d23b0213eb6db6238eb58f5d95ab14e95aeb4add1a1296d93a7671c: Status 404 returned error can't find the container with id ae0aa20f9d23b0213eb6db6238eb58f5d95ab14e95aeb4add1a1296d93a7671c Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.329950 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vtdnt-config-r4p8g"] Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.335213 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.339413 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.354169 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vtdnt-config-r4p8g"] Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.434401 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-run-ovn\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.434476 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d47fe503-70cd-4fdf-a09c-2eba075cce1b-scripts\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.434549 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-run\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.434776 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-log-ovn\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.434908 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d47fe503-70cd-4fdf-a09c-2eba075cce1b-additional-scripts\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.435041 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kdrd\" (UniqueName: \"kubernetes.io/projected/d47fe503-70cd-4fdf-a09c-2eba075cce1b-kube-api-access-9kdrd\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.537481 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kdrd\" (UniqueName: \"kubernetes.io/projected/d47fe503-70cd-4fdf-a09c-2eba075cce1b-kube-api-access-9kdrd\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.537603 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-run-ovn\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.537649 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d47fe503-70cd-4fdf-a09c-2eba075cce1b-scripts\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.537696 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-run\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.537733 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-log-ovn\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.537760 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d47fe503-70cd-4fdf-a09c-2eba075cce1b-additional-scripts\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.538128 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-run-ovn\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.538165 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-run\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.538243 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-log-ovn\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.538641 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d47fe503-70cd-4fdf-a09c-2eba075cce1b-additional-scripts\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.540009 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/d47fe503-70cd-4fdf-a09c-2eba075cce1b-scripts\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.561628 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kdrd\" (UniqueName: \"kubernetes.io/projected/d47fe503-70cd-4fdf-a09c-2eba075cce1b-kube-api-access-9kdrd\") pod \"ovn-controller-vtdnt-config-r4p8g\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.656936 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.793447 4970 generic.go:334] "Generic (PLEG): container finished" podID="fe24c936-58db-48a2-8cf5-ad1ee7473ef8" containerID="2ee88dfa499ba208108ed7624b0017f21883521e745056e02d4b216e4ee44051" exitCode=0 Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.793540 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e136-account-create-p4txc" event={"ID":"fe24c936-58db-48a2-8cf5-ad1ee7473ef8","Type":"ContainerDied","Data":"2ee88dfa499ba208108ed7624b0017f21883521e745056e02d4b216e4ee44051"} Sep 30 10:02:30 crc kubenswrapper[4970]: I0930 10:02:30.793860 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e136-account-create-p4txc" event={"ID":"fe24c936-58db-48a2-8cf5-ad1ee7473ef8","Type":"ContainerStarted","Data":"ae0aa20f9d23b0213eb6db6238eb58f5d95ab14e95aeb4add1a1296d93a7671c"} Sep 30 10:02:31 crc kubenswrapper[4970]: I0930 10:02:31.164651 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vtdnt-config-r4p8g"] Sep 30 10:02:31 crc kubenswrapper[4970]: W0930 10:02:31.170338 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd47fe503_70cd_4fdf_a09c_2eba075cce1b.slice/crio-0ed23bebd3a9836e67a5787a50c45a932d697b56211d95e632f695ad7808cb0a WatchSource:0}: Error finding container 0ed23bebd3a9836e67a5787a50c45a932d697b56211d95e632f695ad7808cb0a: Status 404 returned error can't find the container with id 0ed23bebd3a9836e67a5787a50c45a932d697b56211d95e632f695ad7808cb0a Sep 30 10:02:31 crc kubenswrapper[4970]: I0930 10:02:31.681663 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36a997b6-faa7-4e1d-b5db-ab407fe84792" path="/var/lib/kubelet/pods/36a997b6-faa7-4e1d-b5db-ab407fe84792/volumes" Sep 30 10:02:31 crc kubenswrapper[4970]: E0930 10:02:31.795739 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd47fe503_70cd_4fdf_a09c_2eba075cce1b.slice/crio-conmon-36ca95c0bced24be37cb08569f86c977c97caa9f1602bce9e6315a4c0b6d22f6.scope\": RecentStats: unable to find data in memory cache]" Sep 30 10:02:31 crc kubenswrapper[4970]: I0930 10:02:31.805466 4970 generic.go:334] "Generic (PLEG): container finished" podID="d47fe503-70cd-4fdf-a09c-2eba075cce1b" containerID="36ca95c0bced24be37cb08569f86c977c97caa9f1602bce9e6315a4c0b6d22f6" exitCode=0 Sep 30 10:02:31 crc kubenswrapper[4970]: I0930 10:02:31.805549 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vtdnt-config-r4p8g" 
event={"ID":"d47fe503-70cd-4fdf-a09c-2eba075cce1b","Type":"ContainerDied","Data":"36ca95c0bced24be37cb08569f86c977c97caa9f1602bce9e6315a4c0b6d22f6"} Sep 30 10:02:31 crc kubenswrapper[4970]: I0930 10:02:31.805643 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vtdnt-config-r4p8g" event={"ID":"d47fe503-70cd-4fdf-a09c-2eba075cce1b","Type":"ContainerStarted","Data":"0ed23bebd3a9836e67a5787a50c45a932d697b56211d95e632f695ad7808cb0a"} Sep 30 10:02:32 crc kubenswrapper[4970]: I0930 10:02:32.140258 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e136-account-create-p4txc" Sep 30 10:02:32 crc kubenswrapper[4970]: I0930 10:02:32.168817 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cszq\" (UniqueName: \"kubernetes.io/projected/fe24c936-58db-48a2-8cf5-ad1ee7473ef8-kube-api-access-5cszq\") pod \"fe24c936-58db-48a2-8cf5-ad1ee7473ef8\" (UID: \"fe24c936-58db-48a2-8cf5-ad1ee7473ef8\") " Sep 30 10:02:32 crc kubenswrapper[4970]: I0930 10:02:32.206303 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe24c936-58db-48a2-8cf5-ad1ee7473ef8-kube-api-access-5cszq" (OuterVolumeSpecName: "kube-api-access-5cszq") pod "fe24c936-58db-48a2-8cf5-ad1ee7473ef8" (UID: "fe24c936-58db-48a2-8cf5-ad1ee7473ef8"). InnerVolumeSpecName "kube-api-access-5cszq". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:32 crc kubenswrapper[4970]: I0930 10:02:32.271971 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cszq\" (UniqueName: \"kubernetes.io/projected/fe24c936-58db-48a2-8cf5-ad1ee7473ef8-kube-api-access-5cszq\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:32 crc kubenswrapper[4970]: I0930 10:02:32.816415 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e136-account-create-p4txc" Sep 30 10:02:32 crc kubenswrapper[4970]: I0930 10:02:32.816374 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e136-account-create-p4txc" event={"ID":"fe24c936-58db-48a2-8cf5-ad1ee7473ef8","Type":"ContainerDied","Data":"ae0aa20f9d23b0213eb6db6238eb58f5d95ab14e95aeb4add1a1296d93a7671c"} Sep 30 10:02:32 crc kubenswrapper[4970]: I0930 10:02:32.817550 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae0aa20f9d23b0213eb6db6238eb58f5d95ab14e95aeb4add1a1296d93a7671c" Sep 30 10:02:36 crc kubenswrapper[4970]: I0930 10:02:36.494877 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:02:36 crc kubenswrapper[4970]: I0930 10:02:36.811178 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Sep 30 10:02:37 crc kubenswrapper[4970]: I0930 10:02:37.487380 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:02:37 crc kubenswrapper[4970]: I0930 10:02:37.559241 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ghm66"] Sep 30 10:02:37 crc kubenswrapper[4970]: I0930 10:02:37.559575 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66" podUID="9227c3d4-a39d-4316-9153-157469a2d006" containerName="dnsmasq-dns" containerID="cri-o://80faabc3fd5d18710cefdae7723e0c7061d9dc1450004a3e1ccae13c10a5d8a6" gracePeriod=10 Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.390835 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-mkvnp"] Sep 30 10:02:38 crc kubenswrapper[4970]: E0930 10:02:38.391566 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe24c936-58db-48a2-8cf5-ad1ee7473ef8" containerName="mariadb-account-create" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.391584 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe24c936-58db-48a2-8cf5-ad1ee7473ef8" containerName="mariadb-account-create" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.391769 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe24c936-58db-48a2-8cf5-ad1ee7473ef8" containerName="mariadb-account-create" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.392407 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mkvnp" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.400315 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mkvnp"] Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.477640 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-df4fk"] Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.487145 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-df4fk"] Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.487259 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-df4fk" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.514271 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7zp8\" (UniqueName: \"kubernetes.io/projected/2e97f0e8-a17c-47b6-ae38-ac69404b9b01-kube-api-access-g7zp8\") pod \"cinder-db-create-mkvnp\" (UID: \"2e97f0e8-a17c-47b6-ae38-ac69404b9b01\") " pod="openstack/cinder-db-create-mkvnp" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.616180 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7zp8\" (UniqueName: \"kubernetes.io/projected/2e97f0e8-a17c-47b6-ae38-ac69404b9b01-kube-api-access-g7zp8\") pod \"cinder-db-create-mkvnp\" (UID: \"2e97f0e8-a17c-47b6-ae38-ac69404b9b01\") " pod="openstack/cinder-db-create-mkvnp" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.617463 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hzwc\" (UniqueName: \"kubernetes.io/projected/21283459-5e37-4ebb-9f88-4ddfb5b3dc79-kube-api-access-8hzwc\") pod \"barbican-db-create-df4fk\" (UID: \"21283459-5e37-4ebb-9f88-4ddfb5b3dc79\") " pod="openstack/barbican-db-create-df4fk" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.642707 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7zp8\" (UniqueName: \"kubernetes.io/projected/2e97f0e8-a17c-47b6-ae38-ac69404b9b01-kube-api-access-g7zp8\") pod \"cinder-db-create-mkvnp\" (UID: \"2e97f0e8-a17c-47b6-ae38-ac69404b9b01\") " pod="openstack/cinder-db-create-mkvnp" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.645508 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ksbl9"] Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.646666 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.650037 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.651048 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.651390 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.654636 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7kcbp" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.699591 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ksbl9"] Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.721260 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f09121d-1a78-4a2a-90ca-90aa15067ebf-config-data\") pod \"keystone-db-sync-ksbl9\" (UID: \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\") " pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.721397 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hzwc\" (UniqueName: \"kubernetes.io/projected/21283459-5e37-4ebb-9f88-4ddfb5b3dc79-kube-api-access-8hzwc\") pod \"barbican-db-create-df4fk\" (UID: \"21283459-5e37-4ebb-9f88-4ddfb5b3dc79\") " pod="openstack/barbican-db-create-df4fk" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.723887 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mkvnp" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.725269 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5bfc\" (UniqueName: \"kubernetes.io/projected/0f09121d-1a78-4a2a-90ca-90aa15067ebf-kube-api-access-c5bfc\") pod \"keystone-db-sync-ksbl9\" (UID: \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\") " pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.725342 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f09121d-1a78-4a2a-90ca-90aa15067ebf-combined-ca-bundle\") pod \"keystone-db-sync-ksbl9\" (UID: \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\") " pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.737789 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-cl74x"] Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.739686 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cl74x" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.744474 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cl74x"] Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.768514 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.784958 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hzwc\" (UniqueName: \"kubernetes.io/projected/21283459-5e37-4ebb-9f88-4ddfb5b3dc79-kube-api-access-8hzwc\") pod \"barbican-db-create-df4fk\" (UID: \"21283459-5e37-4ebb-9f88-4ddfb5b3dc79\") " pod="openstack/barbican-db-create-df4fk" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.809702 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-df4fk" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.828820 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-run\") pod \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.828962 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kdrd\" (UniqueName: \"kubernetes.io/projected/d47fe503-70cd-4fdf-a09c-2eba075cce1b-kube-api-access-9kdrd\") pod \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.829014 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-run-ovn\") pod \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.829046 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-log-ovn\") pod \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.829264 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d47fe503-70cd-4fdf-a09c-2eba075cce1b-additional-scripts\") pod \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.829291 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d47fe503-70cd-4fdf-a09c-2eba075cce1b-scripts\") pod \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\" (UID: \"d47fe503-70cd-4fdf-a09c-2eba075cce1b\") " Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.829574 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g4n9\" (UniqueName: \"kubernetes.io/projected/f26d3109-a89d-4787-899e-a370559a9f42-kube-api-access-2g4n9\") pod \"neutron-db-create-cl74x\" (UID: \"f26d3109-a89d-4787-899e-a370559a9f42\") " pod="openstack/neutron-db-create-cl74x" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.829713 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f09121d-1a78-4a2a-90ca-90aa15067ebf-config-data\") pod \"keystone-db-sync-ksbl9\" (UID: \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\") " pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:38 crc 
kubenswrapper[4970]: I0930 10:02:38.829773 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5bfc\" (UniqueName: \"kubernetes.io/projected/0f09121d-1a78-4a2a-90ca-90aa15067ebf-kube-api-access-c5bfc\") pod \"keystone-db-sync-ksbl9\" (UID: \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\") " pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.829808 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f09121d-1a78-4a2a-90ca-90aa15067ebf-combined-ca-bundle\") pod \"keystone-db-sync-ksbl9\" (UID: \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\") " pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.831616 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "d47fe503-70cd-4fdf-a09c-2eba075cce1b" (UID: "d47fe503-70cd-4fdf-a09c-2eba075cce1b"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.831704 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "d47fe503-70cd-4fdf-a09c-2eba075cce1b" (UID: "d47fe503-70cd-4fdf-a09c-2eba075cce1b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.832309 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d47fe503-70cd-4fdf-a09c-2eba075cce1b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "d47fe503-70cd-4fdf-a09c-2eba075cce1b" (UID: "d47fe503-70cd-4fdf-a09c-2eba075cce1b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.832716 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d47fe503-70cd-4fdf-a09c-2eba075cce1b-scripts" (OuterVolumeSpecName: "scripts") pod "d47fe503-70cd-4fdf-a09c-2eba075cce1b" (UID: "d47fe503-70cd-4fdf-a09c-2eba075cce1b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.832778 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-run" (OuterVolumeSpecName: "var-run") pod "d47fe503-70cd-4fdf-a09c-2eba075cce1b" (UID: "d47fe503-70cd-4fdf-a09c-2eba075cce1b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.839396 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d47fe503-70cd-4fdf-a09c-2eba075cce1b-kube-api-access-9kdrd" (OuterVolumeSpecName: "kube-api-access-9kdrd") pod "d47fe503-70cd-4fdf-a09c-2eba075cce1b" (UID: "d47fe503-70cd-4fdf-a09c-2eba075cce1b"). InnerVolumeSpecName "kube-api-access-9kdrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.846086 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f09121d-1a78-4a2a-90ca-90aa15067ebf-config-data\") pod \"keystone-db-sync-ksbl9\" (UID: \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\") " pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.849709 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f09121d-1a78-4a2a-90ca-90aa15067ebf-combined-ca-bundle\") pod \"keystone-db-sync-ksbl9\" (UID: \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\") " pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.856598 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5bfc\" (UniqueName: \"kubernetes.io/projected/0f09121d-1a78-4a2a-90ca-90aa15067ebf-kube-api-access-c5bfc\") pod \"keystone-db-sync-ksbl9\" (UID: \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\") " pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.906530 4970 generic.go:334] "Generic (PLEG): container finished" podID="9227c3d4-a39d-4316-9153-157469a2d006" containerID="80faabc3fd5d18710cefdae7723e0c7061d9dc1450004a3e1ccae13c10a5d8a6" exitCode=0 Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.906683 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66" event={"ID":"9227c3d4-a39d-4316-9153-157469a2d006","Type":"ContainerDied","Data":"80faabc3fd5d18710cefdae7723e0c7061d9dc1450004a3e1ccae13c10a5d8a6"} Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.909967 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vtdnt-config-r4p8g" event={"ID":"d47fe503-70cd-4fdf-a09c-2eba075cce1b","Type":"ContainerDied","Data":"0ed23bebd3a9836e67a5787a50c45a932d697b56211d95e632f695ad7808cb0a"} Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.910032 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ed23bebd3a9836e67a5787a50c45a932d697b56211d95e632f695ad7808cb0a" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.910126 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vtdnt-config-r4p8g" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.931890 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2g4n9\" (UniqueName: \"kubernetes.io/projected/f26d3109-a89d-4787-899e-a370559a9f42-kube-api-access-2g4n9\") pod \"neutron-db-create-cl74x\" (UID: \"f26d3109-a89d-4787-899e-a370559a9f42\") " pod="openstack/neutron-db-create-cl74x" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.932062 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d47fe503-70cd-4fdf-a09c-2eba075cce1b-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.932081 4970 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/d47fe503-70cd-4fdf-a09c-2eba075cce1b-additional-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.932094 4970 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-run\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.932102 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kdrd\" (UniqueName: \"kubernetes.io/projected/d47fe503-70cd-4fdf-a09c-2eba075cce1b-kube-api-access-9kdrd\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.932111 4970 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.932126 4970 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/d47fe503-70cd-4fdf-a09c-2eba075cce1b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:38 crc kubenswrapper[4970]: I0930 10:02:38.957358 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2g4n9\" (UniqueName: \"kubernetes.io/projected/f26d3109-a89d-4787-899e-a370559a9f42-kube-api-access-2g4n9\") pod \"neutron-db-create-cl74x\" (UID: \"f26d3109-a89d-4787-899e-a370559a9f42\") " pod="openstack/neutron-db-create-cl74x" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.065382 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.073544 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cl74x" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.075436 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.142012 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-dns-svc\") pod \"9227c3d4-a39d-4316-9153-157469a2d006\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.142232 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-config\") pod \"9227c3d4-a39d-4316-9153-157469a2d006\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.142314 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-ovsdbserver-sb\") pod \"9227c3d4-a39d-4316-9153-157469a2d006\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.142348 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttlgw\" (UniqueName: \"kubernetes.io/projected/9227c3d4-a39d-4316-9153-157469a2d006-kube-api-access-ttlgw\") pod \"9227c3d4-a39d-4316-9153-157469a2d006\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.142444 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-ovsdbserver-nb\") pod \"9227c3d4-a39d-4316-9153-157469a2d006\" (UID: \"9227c3d4-a39d-4316-9153-157469a2d006\") " Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.149237 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9227c3d4-a39d-4316-9153-157469a2d006-kube-api-access-ttlgw" (OuterVolumeSpecName: "kube-api-access-ttlgw") pod "9227c3d4-a39d-4316-9153-157469a2d006" (UID: "9227c3d4-a39d-4316-9153-157469a2d006"). InnerVolumeSpecName "kube-api-access-ttlgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.198204 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9227c3d4-a39d-4316-9153-157469a2d006" (UID: "9227c3d4-a39d-4316-9153-157469a2d006"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.231073 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9227c3d4-a39d-4316-9153-157469a2d006" (UID: "9227c3d4-a39d-4316-9153-157469a2d006"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.237810 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-config" (OuterVolumeSpecName: "config") pod "9227c3d4-a39d-4316-9153-157469a2d006" (UID: "9227c3d4-a39d-4316-9153-157469a2d006"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.244256 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.244288 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.244309 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.244319 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttlgw\" (UniqueName: \"kubernetes.io/projected/9227c3d4-a39d-4316-9153-157469a2d006-kube-api-access-ttlgw\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.247762 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9227c3d4-a39d-4316-9153-157469a2d006" (UID: "9227c3d4-a39d-4316-9153-157469a2d006"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.322405 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-mkvnp"] Sep 30 10:02:39 crc kubenswrapper[4970]: W0930 10:02:39.331111 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e97f0e8_a17c_47b6_ae38_ac69404b9b01.slice/crio-52de5de4d2479e30d961adcc9cbd0136c7835c9cda4c8c16f704a2c084101f43 WatchSource:0}: Error finding container 52de5de4d2479e30d961adcc9cbd0136c7835c9cda4c8c16f704a2c084101f43: Status 404 returned error can't find the container with id 52de5de4d2479e30d961adcc9cbd0136c7835c9cda4c8c16f704a2c084101f43 Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.350760 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9227c3d4-a39d-4316-9153-157469a2d006-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.446484 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-df4fk"] Sep 30 10:02:39 crc kubenswrapper[4970]: W0930 10:02:39.455899 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21283459_5e37_4ebb_9f88_4ddfb5b3dc79.slice/crio-43a2c58e2104d8e64099367cb5142393faad20da70c2b0d545729fa1dfa71947 WatchSource:0}: Error finding container 43a2c58e2104d8e64099367cb5142393faad20da70c2b0d545729fa1dfa71947: Status 404 returned error can't find the container with id 43a2c58e2104d8e64099367cb5142393faad20da70c2b0d545729fa1dfa71947 Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.668706 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ksbl9"] Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.786920 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-cl74x"] Sep 30 10:02:39 
crc kubenswrapper[4970]: W0930 10:02:39.828350 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf26d3109_a89d_4787_899e_a370559a9f42.slice/crio-edb1f266cb1357399a725ce3d0ab74f82480532fdcbe9166c68392a297e953a8 WatchSource:0}: Error finding container edb1f266cb1357399a725ce3d0ab74f82480532fdcbe9166c68392a297e953a8: Status 404 returned error can't find the container with id edb1f266cb1357399a725ce3d0ab74f82480532fdcbe9166c68392a297e953a8 Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.923792 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ksbl9" event={"ID":"0f09121d-1a78-4a2a-90ca-90aa15067ebf","Type":"ContainerStarted","Data":"08b67c9eabb2bfabd9d7b12dd1cf352ad6866492d384bcab490728ae09966c47"} Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.925310 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vtdnt-config-r4p8g"] Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.925622 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-s8gsl" event={"ID":"a77c6408-e115-4a90-bba3-2d5c64f2c8a3","Type":"ContainerStarted","Data":"808e41ea9a9e63232613721e4540c3c56be36b18911a2bc8ea8c6f421dcefc8e"} Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.933007 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-df4fk" event={"ID":"21283459-5e37-4ebb-9f88-4ddfb5b3dc79","Type":"ContainerStarted","Data":"ae40f9f84f1166ef4ce5412b8c3ed40d2d3a7e73b3fddc6858a79cf0e7a8b976"} Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.933065 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-df4fk" event={"ID":"21283459-5e37-4ebb-9f88-4ddfb5b3dc79","Type":"ContainerStarted","Data":"43a2c58e2104d8e64099367cb5142393faad20da70c2b0d545729fa1dfa71947"} Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.935093 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66" event={"ID":"9227c3d4-a39d-4316-9153-157469a2d006","Type":"ContainerDied","Data":"1d5828b2e963b0427e9b8ecbbb55bef7e57cce5220184dba9261acecef704013"} Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.935144 4970 scope.go:117] "RemoveContainer" containerID="80faabc3fd5d18710cefdae7723e0c7061d9dc1450004a3e1ccae13c10a5d8a6" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.935277 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ghm66" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.940807 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vtdnt-config-r4p8g"] Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.944032 4970 generic.go:334] "Generic (PLEG): container finished" podID="2e97f0e8-a17c-47b6-ae38-ac69404b9b01" containerID="9c6eccdb0024e6a87f62c0ee478400eefc0e593c775c92f1ee7b80f5ce3e2fc5" exitCode=0 Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.944309 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mkvnp" event={"ID":"2e97f0e8-a17c-47b6-ae38-ac69404b9b01","Type":"ContainerDied","Data":"9c6eccdb0024e6a87f62c0ee478400eefc0e593c775c92f1ee7b80f5ce3e2fc5"} Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.944423 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mkvnp" event={"ID":"2e97f0e8-a17c-47b6-ae38-ac69404b9b01","Type":"ContainerStarted","Data":"52de5de4d2479e30d961adcc9cbd0136c7835c9cda4c8c16f704a2c084101f43"} Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.946020 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cl74x" event={"ID":"f26d3109-a89d-4787-899e-a370559a9f42","Type":"ContainerStarted","Data":"edb1f266cb1357399a725ce3d0ab74f82480532fdcbe9166c68392a297e953a8"} Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.959292 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-s8gsl" podStartSLOduration=2.350832386 podStartE2EDuration="14.95926634s" podCreationTimestamp="2025-09-30 10:02:25 +0000 UTC" firstStartedPulling="2025-09-30 10:02:26.233595989 +0000 UTC m=+959.305446923" lastFinishedPulling="2025-09-30 10:02:38.842029953 +0000 UTC m=+971.913880877" observedRunningTime="2025-09-30 10:02:39.945661938 +0000 UTC m=+973.017512872" watchObservedRunningTime="2025-09-30 10:02:39.95926634 +0000 UTC m=+973.031117264" Sep 30 10:02:39 crc kubenswrapper[4970]: I0930 10:02:39.972074 4970 scope.go:117] "RemoveContainer" containerID="19d6092f32f466986ea05755f91995a1779619fd4bd7df08cc2ea1f9d07c7360" Sep 30 10:02:40 crc kubenswrapper[4970]: I0930 10:02:40.032482 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-df4fk" podStartSLOduration=2.032460218 podStartE2EDuration="2.032460218s" podCreationTimestamp="2025-09-30 10:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:02:40.008333706 +0000 UTC m=+973.080184640" watchObservedRunningTime="2025-09-30 10:02:40.032460218 +0000 UTC m=+973.104311152" Sep 30 10:02:40 crc kubenswrapper[4970]: I0930 10:02:40.038332 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ghm66"] Sep 30 10:02:40 crc kubenswrapper[4970]: I0930 10:02:40.045416 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ghm66"] Sep 30 10:02:40 crc kubenswrapper[4970]: I0930 10:02:40.962972 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cl74x" event={"ID":"f26d3109-a89d-4787-899e-a370559a9f42","Type":"ContainerStarted","Data":"f807d9238ae07d6e88f0cd0258c577a9816b171c5ee5ea4fd697289456c3c3cf"} Sep 30 10:02:40 crc kubenswrapper[4970]: I0930 10:02:40.971742 4970 generic.go:334] "Generic (PLEG): container finished" 
podID="21283459-5e37-4ebb-9f88-4ddfb5b3dc79" containerID="ae40f9f84f1166ef4ce5412b8c3ed40d2d3a7e73b3fddc6858a79cf0e7a8b976" exitCode=0 Sep 30 10:02:40 crc kubenswrapper[4970]: I0930 10:02:40.971833 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-df4fk" event={"ID":"21283459-5e37-4ebb-9f88-4ddfb5b3dc79","Type":"ContainerDied","Data":"ae40f9f84f1166ef4ce5412b8c3ed40d2d3a7e73b3fddc6858a79cf0e7a8b976"} Sep 30 10:02:40 crc kubenswrapper[4970]: I0930 10:02:40.989262 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-cl74x" podStartSLOduration=2.989236028 podStartE2EDuration="2.989236028s" podCreationTimestamp="2025-09-30 10:02:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:02:40.978967645 +0000 UTC m=+974.050818589" watchObservedRunningTime="2025-09-30 10:02:40.989236028 +0000 UTC m=+974.061086962" Sep 30 10:02:41 crc kubenswrapper[4970]: I0930 10:02:41.329422 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-mkvnp" Sep 30 10:02:41 crc kubenswrapper[4970]: I0930 10:02:41.401323 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7zp8\" (UniqueName: \"kubernetes.io/projected/2e97f0e8-a17c-47b6-ae38-ac69404b9b01-kube-api-access-g7zp8\") pod \"2e97f0e8-a17c-47b6-ae38-ac69404b9b01\" (UID: \"2e97f0e8-a17c-47b6-ae38-ac69404b9b01\") " Sep 30 10:02:41 crc kubenswrapper[4970]: I0930 10:02:41.414296 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e97f0e8-a17c-47b6-ae38-ac69404b9b01-kube-api-access-g7zp8" (OuterVolumeSpecName: "kube-api-access-g7zp8") pod "2e97f0e8-a17c-47b6-ae38-ac69404b9b01" (UID: "2e97f0e8-a17c-47b6-ae38-ac69404b9b01"). InnerVolumeSpecName "kube-api-access-g7zp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:41 crc kubenswrapper[4970]: I0930 10:02:41.504073 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7zp8\" (UniqueName: \"kubernetes.io/projected/2e97f0e8-a17c-47b6-ae38-ac69404b9b01-kube-api-access-g7zp8\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:41 crc kubenswrapper[4970]: I0930 10:02:41.682616 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9227c3d4-a39d-4316-9153-157469a2d006" path="/var/lib/kubelet/pods/9227c3d4-a39d-4316-9153-157469a2d006/volumes" Sep 30 10:02:41 crc kubenswrapper[4970]: I0930 10:02:41.684363 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d47fe503-70cd-4fdf-a09c-2eba075cce1b" path="/var/lib/kubelet/pods/d47fe503-70cd-4fdf-a09c-2eba075cce1b/volumes" Sep 30 10:02:41 crc kubenswrapper[4970]: I0930 10:02:41.991780 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-mkvnp" event={"ID":"2e97f0e8-a17c-47b6-ae38-ac69404b9b01","Type":"ContainerDied","Data":"52de5de4d2479e30d961adcc9cbd0136c7835c9cda4c8c16f704a2c084101f43"} Sep 30 10:02:41 crc kubenswrapper[4970]: I0930 10:02:41.991849 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52de5de4d2479e30d961adcc9cbd0136c7835c9cda4c8c16f704a2c084101f43" Sep 30 10:02:41 crc kubenswrapper[4970]: I0930 10:02:41.993298 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-mkvnp" Sep 30 10:02:41 crc kubenswrapper[4970]: I0930 10:02:41.993803 4970 generic.go:334] "Generic (PLEG): container finished" podID="f26d3109-a89d-4787-899e-a370559a9f42" containerID="f807d9238ae07d6e88f0cd0258c577a9816b171c5ee5ea4fd697289456c3c3cf" exitCode=0 Sep 30 10:02:41 crc kubenswrapper[4970]: I0930 10:02:41.993881 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cl74x" event={"ID":"f26d3109-a89d-4787-899e-a370559a9f42","Type":"ContainerDied","Data":"f807d9238ae07d6e88f0cd0258c577a9816b171c5ee5ea4fd697289456c3c3cf"} Sep 30 10:02:42 crc kubenswrapper[4970]: I0930 10:02:42.289254 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-df4fk" Sep 30 10:02:42 crc kubenswrapper[4970]: I0930 10:02:42.419817 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hzwc\" (UniqueName: \"kubernetes.io/projected/21283459-5e37-4ebb-9f88-4ddfb5b3dc79-kube-api-access-8hzwc\") pod \"21283459-5e37-4ebb-9f88-4ddfb5b3dc79\" (UID: \"21283459-5e37-4ebb-9f88-4ddfb5b3dc79\") " Sep 30 10:02:42 crc kubenswrapper[4970]: I0930 10:02:42.433373 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21283459-5e37-4ebb-9f88-4ddfb5b3dc79-kube-api-access-8hzwc" (OuterVolumeSpecName: "kube-api-access-8hzwc") pod "21283459-5e37-4ebb-9f88-4ddfb5b3dc79" (UID: "21283459-5e37-4ebb-9f88-4ddfb5b3dc79"). InnerVolumeSpecName "kube-api-access-8hzwc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:42 crc kubenswrapper[4970]: I0930 10:02:42.522826 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hzwc\" (UniqueName: \"kubernetes.io/projected/21283459-5e37-4ebb-9f88-4ddfb5b3dc79-kube-api-access-8hzwc\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:43 crc kubenswrapper[4970]: I0930 10:02:43.007099 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-df4fk" event={"ID":"21283459-5e37-4ebb-9f88-4ddfb5b3dc79","Type":"ContainerDied","Data":"43a2c58e2104d8e64099367cb5142393faad20da70c2b0d545729fa1dfa71947"} Sep 30 10:02:43 crc kubenswrapper[4970]: I0930 10:02:43.007505 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43a2c58e2104d8e64099367cb5142393faad20da70c2b0d545729fa1dfa71947" Sep 30 10:02:43 crc kubenswrapper[4970]: I0930 10:02:43.007147 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-df4fk" Sep 30 10:02:43 crc kubenswrapper[4970]: I0930 10:02:43.301103 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cl74x" Sep 30 10:02:43 crc kubenswrapper[4970]: I0930 10:02:43.441823 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2g4n9\" (UniqueName: \"kubernetes.io/projected/f26d3109-a89d-4787-899e-a370559a9f42-kube-api-access-2g4n9\") pod \"f26d3109-a89d-4787-899e-a370559a9f42\" (UID: \"f26d3109-a89d-4787-899e-a370559a9f42\") " Sep 30 10:02:43 crc kubenswrapper[4970]: I0930 10:02:43.448767 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f26d3109-a89d-4787-899e-a370559a9f42-kube-api-access-2g4n9" (OuterVolumeSpecName: "kube-api-access-2g4n9") pod "f26d3109-a89d-4787-899e-a370559a9f42" (UID: "f26d3109-a89d-4787-899e-a370559a9f42"). 
InnerVolumeSpecName "kube-api-access-2g4n9". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:43 crc kubenswrapper[4970]: I0930 10:02:43.544362 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2g4n9\" (UniqueName: \"kubernetes.io/projected/f26d3109-a89d-4787-899e-a370559a9f42-kube-api-access-2g4n9\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:44 crc kubenswrapper[4970]: I0930 10:02:44.016944 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-cl74x" event={"ID":"f26d3109-a89d-4787-899e-a370559a9f42","Type":"ContainerDied","Data":"edb1f266cb1357399a725ce3d0ab74f82480532fdcbe9166c68392a297e953a8"} Sep 30 10:02:44 crc kubenswrapper[4970]: I0930 10:02:44.017364 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edb1f266cb1357399a725ce3d0ab74f82480532fdcbe9166c68392a297e953a8" Sep 30 10:02:44 crc kubenswrapper[4970]: I0930 10:02:44.017011 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-cl74x" Sep 30 10:02:47 crc kubenswrapper[4970]: I0930 10:02:47.052857 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ksbl9" event={"ID":"0f09121d-1a78-4a2a-90ca-90aa15067ebf","Type":"ContainerStarted","Data":"3b436dc00ecc5c6e68a4213249dade95ef6f177bfa38415ce1ce9e08a5bc41d0"} Sep 30 10:02:47 crc kubenswrapper[4970]: I0930 10:02:47.072658 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ksbl9" podStartSLOduration=2.453873271 podStartE2EDuration="9.072634858s" podCreationTimestamp="2025-09-30 10:02:38 +0000 UTC" firstStartedPulling="2025-09-30 10:02:39.666099249 +0000 UTC m=+972.737950183" lastFinishedPulling="2025-09-30 10:02:46.284860826 +0000 UTC m=+979.356711770" observedRunningTime="2025-09-30 10:02:47.069454494 +0000 UTC m=+980.141305438" watchObservedRunningTime="2025-09-30 10:02:47.072634858 +0000 UTC m=+980.144485802" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.514604 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-b35f-account-create-bdr9x"] Sep 30 10:02:48 crc kubenswrapper[4970]: E0930 10:02:48.515079 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9227c3d4-a39d-4316-9153-157469a2d006" containerName="dnsmasq-dns" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.515099 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9227c3d4-a39d-4316-9153-157469a2d006" containerName="dnsmasq-dns" Sep 30 10:02:48 crc kubenswrapper[4970]: E0930 10:02:48.515117 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21283459-5e37-4ebb-9f88-4ddfb5b3dc79" containerName="mariadb-database-create" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.515125 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="21283459-5e37-4ebb-9f88-4ddfb5b3dc79" containerName="mariadb-database-create" Sep 30 10:02:48 crc kubenswrapper[4970]: E0930 10:02:48.515161 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d47fe503-70cd-4fdf-a09c-2eba075cce1b" containerName="ovn-config" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.515170 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d47fe503-70cd-4fdf-a09c-2eba075cce1b" containerName="ovn-config" Sep 30 10:02:48 crc kubenswrapper[4970]: E0930 10:02:48.515184 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f26d3109-a89d-4787-899e-a370559a9f42" 
containerName="mariadb-database-create" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.515190 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f26d3109-a89d-4787-899e-a370559a9f42" containerName="mariadb-database-create" Sep 30 10:02:48 crc kubenswrapper[4970]: E0930 10:02:48.515205 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e97f0e8-a17c-47b6-ae38-ac69404b9b01" containerName="mariadb-database-create" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.515211 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e97f0e8-a17c-47b6-ae38-ac69404b9b01" containerName="mariadb-database-create" Sep 30 10:02:48 crc kubenswrapper[4970]: E0930 10:02:48.515232 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9227c3d4-a39d-4316-9153-157469a2d006" containerName="init" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.515239 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9227c3d4-a39d-4316-9153-157469a2d006" containerName="init" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.515416 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d47fe503-70cd-4fdf-a09c-2eba075cce1b" containerName="ovn-config" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.515432 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9227c3d4-a39d-4316-9153-157469a2d006" containerName="dnsmasq-dns" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.515448 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e97f0e8-a17c-47b6-ae38-ac69404b9b01" containerName="mariadb-database-create" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.515459 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="21283459-5e37-4ebb-9f88-4ddfb5b3dc79" containerName="mariadb-database-create" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.515472 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f26d3109-a89d-4787-899e-a370559a9f42" containerName="mariadb-database-create" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.516092 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b35f-account-create-bdr9x" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.518821 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.531427 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b35f-account-create-bdr9x"] Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.601676 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6710-account-create-qf44n"] Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.602803 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6710-account-create-qf44n" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.605007 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.613192 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6710-account-create-qf44n"] Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.641190 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66b99\" (UniqueName: \"kubernetes.io/projected/26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1-kube-api-access-66b99\") pod \"cinder-6710-account-create-qf44n\" (UID: \"26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1\") " pod="openstack/cinder-6710-account-create-qf44n" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.641273 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx866\" (UniqueName: \"kubernetes.io/projected/c3705523-7482-4c42-a7e6-9c2081ea7ce5-kube-api-access-zx866\") pod \"barbican-b35f-account-create-bdr9x\" (UID: \"c3705523-7482-4c42-a7e6-9c2081ea7ce5\") " pod="openstack/barbican-b35f-account-create-bdr9x" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.742863 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66b99\" (UniqueName: \"kubernetes.io/projected/26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1-kube-api-access-66b99\") pod \"cinder-6710-account-create-qf44n\" (UID: \"26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1\") " pod="openstack/cinder-6710-account-create-qf44n" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.743368 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx866\" (UniqueName: \"kubernetes.io/projected/c3705523-7482-4c42-a7e6-9c2081ea7ce5-kube-api-access-zx866\") pod \"barbican-b35f-account-create-bdr9x\" (UID: \"c3705523-7482-4c42-a7e6-9c2081ea7ce5\") " pod="openstack/barbican-b35f-account-create-bdr9x" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.763693 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx866\" (UniqueName: \"kubernetes.io/projected/c3705523-7482-4c42-a7e6-9c2081ea7ce5-kube-api-access-zx866\") pod \"barbican-b35f-account-create-bdr9x\" (UID: \"c3705523-7482-4c42-a7e6-9c2081ea7ce5\") " pod="openstack/barbican-b35f-account-create-bdr9x" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.764935 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66b99\" (UniqueName: \"kubernetes.io/projected/26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1-kube-api-access-66b99\") pod \"cinder-6710-account-create-qf44n\" (UID: \"26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1\") " pod="openstack/cinder-6710-account-create-qf44n" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.862033 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b35f-account-create-bdr9x" Sep 30 10:02:48 crc kubenswrapper[4970]: I0930 10:02:48.920723 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6710-account-create-qf44n" Sep 30 10:02:49 crc kubenswrapper[4970]: I0930 10:02:49.341175 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-b35f-account-create-bdr9x"] Sep 30 10:02:49 crc kubenswrapper[4970]: W0930 10:02:49.344364 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3705523_7482_4c42_a7e6_9c2081ea7ce5.slice/crio-4fb38a2c19357abf00324367b7ee9bce856e6db558ea024ddd04159a831c54fb WatchSource:0}: Error finding container 4fb38a2c19357abf00324367b7ee9bce856e6db558ea024ddd04159a831c54fb: Status 404 returned error can't find the container with id 4fb38a2c19357abf00324367b7ee9bce856e6db558ea024ddd04159a831c54fb Sep 30 10:02:49 crc kubenswrapper[4970]: I0930 10:02:49.438803 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6710-account-create-qf44n"] Sep 30 10:02:49 crc kubenswrapper[4970]: W0930 10:02:49.454813 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26f06d30_5f1f_43a9_94fd_1eca1e5ee0c1.slice/crio-f394158e705e0e0e80957c090fc405cb60579d547cfc1f20ec21fa82abc2bb1c WatchSource:0}: Error finding container f394158e705e0e0e80957c090fc405cb60579d547cfc1f20ec21fa82abc2bb1c: Status 404 returned error can't find the container with id f394158e705e0e0e80957c090fc405cb60579d547cfc1f20ec21fa82abc2bb1c Sep 30 10:02:50 crc kubenswrapper[4970]: I0930 10:02:50.082238 4970 generic.go:334] "Generic (PLEG): container finished" podID="26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1" containerID="9fba5b9c2a718fe672382c81c7c52482e137a92aa07efbc8e15a28daa24e5276" exitCode=0 Sep 30 10:02:50 crc kubenswrapper[4970]: I0930 10:02:50.082301 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6710-account-create-qf44n" event={"ID":"26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1","Type":"ContainerDied","Data":"9fba5b9c2a718fe672382c81c7c52482e137a92aa07efbc8e15a28daa24e5276"} Sep 30 10:02:50 crc kubenswrapper[4970]: I0930 10:02:50.082723 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6710-account-create-qf44n" event={"ID":"26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1","Type":"ContainerStarted","Data":"f394158e705e0e0e80957c090fc405cb60579d547cfc1f20ec21fa82abc2bb1c"} Sep 30 10:02:50 crc kubenswrapper[4970]: I0930 10:02:50.084952 4970 generic.go:334] "Generic (PLEG): container finished" podID="c3705523-7482-4c42-a7e6-9c2081ea7ce5" containerID="c6c4bda9e70ba250112ef57d0fd1dae3c1436a2b633eeb1c3bdc3b5a76c999bc" exitCode=0 Sep 30 10:02:50 crc kubenswrapper[4970]: I0930 10:02:50.085021 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b35f-account-create-bdr9x" event={"ID":"c3705523-7482-4c42-a7e6-9c2081ea7ce5","Type":"ContainerDied","Data":"c6c4bda9e70ba250112ef57d0fd1dae3c1436a2b633eeb1c3bdc3b5a76c999bc"} Sep 30 10:02:50 crc kubenswrapper[4970]: I0930 10:02:50.085068 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b35f-account-create-bdr9x" event={"ID":"c3705523-7482-4c42-a7e6-9c2081ea7ce5","Type":"ContainerStarted","Data":"4fb38a2c19357abf00324367b7ee9bce856e6db558ea024ddd04159a831c54fb"} Sep 30 10:02:50 crc kubenswrapper[4970]: I0930 10:02:50.086644 4970 generic.go:334] "Generic (PLEG): container finished" podID="0f09121d-1a78-4a2a-90ca-90aa15067ebf" containerID="3b436dc00ecc5c6e68a4213249dade95ef6f177bfa38415ce1ce9e08a5bc41d0" exitCode=0 Sep 30 10:02:50 
crc kubenswrapper[4970]: I0930 10:02:50.086700 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ksbl9" event={"ID":"0f09121d-1a78-4a2a-90ca-90aa15067ebf","Type":"ContainerDied","Data":"3b436dc00ecc5c6e68a4213249dade95ef6f177bfa38415ce1ce9e08a5bc41d0"} Sep 30 10:02:50 crc kubenswrapper[4970]: I0930 10:02:50.088731 4970 generic.go:334] "Generic (PLEG): container finished" podID="a77c6408-e115-4a90-bba3-2d5c64f2c8a3" containerID="808e41ea9a9e63232613721e4540c3c56be36b18911a2bc8ea8c6f421dcefc8e" exitCode=0 Sep 30 10:02:50 crc kubenswrapper[4970]: I0930 10:02:50.088788 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-s8gsl" event={"ID":"a77c6408-e115-4a90-bba3-2d5c64f2c8a3","Type":"ContainerDied","Data":"808e41ea9a9e63232613721e4540c3c56be36b18911a2bc8ea8c6f421dcefc8e"} Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.568410 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6710-account-create-qf44n" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.574426 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.588366 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.630449 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66b99\" (UniqueName: \"kubernetes.io/projected/26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1-kube-api-access-66b99\") pod \"26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1\" (UID: \"26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1\") " Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.646386 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1-kube-api-access-66b99" (OuterVolumeSpecName: "kube-api-access-66b99") pod "26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1" (UID: "26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1"). InnerVolumeSpecName "kube-api-access-66b99". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.654138 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-b35f-account-create-bdr9x" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.732941 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f09121d-1a78-4a2a-90ca-90aa15067ebf-combined-ca-bundle\") pod \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\" (UID: \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\") " Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.733081 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-db-sync-config-data\") pod \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.733120 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5bfc\" (UniqueName: \"kubernetes.io/projected/0f09121d-1a78-4a2a-90ca-90aa15067ebf-kube-api-access-c5bfc\") pod \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\" (UID: \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\") " Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.733174 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f09121d-1a78-4a2a-90ca-90aa15067ebf-config-data\") pod \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\" (UID: \"0f09121d-1a78-4a2a-90ca-90aa15067ebf\") " Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.733199 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-combined-ca-bundle\") pod \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.733230 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgwf7\" (UniqueName: \"kubernetes.io/projected/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-kube-api-access-hgwf7\") pod \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.733298 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-config-data\") pod \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\" (UID: \"a77c6408-e115-4a90-bba3-2d5c64f2c8a3\") " Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.733341 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx866\" (UniqueName: \"kubernetes.io/projected/c3705523-7482-4c42-a7e6-9c2081ea7ce5-kube-api-access-zx866\") pod \"c3705523-7482-4c42-a7e6-9c2081ea7ce5\" (UID: \"c3705523-7482-4c42-a7e6-9c2081ea7ce5\") " Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.733661 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66b99\" (UniqueName: \"kubernetes.io/projected/26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1-kube-api-access-66b99\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.738578 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f09121d-1a78-4a2a-90ca-90aa15067ebf-kube-api-access-c5bfc" (OuterVolumeSpecName: "kube-api-access-c5bfc") pod "0f09121d-1a78-4a2a-90ca-90aa15067ebf" (UID: 
"0f09121d-1a78-4a2a-90ca-90aa15067ebf"). InnerVolumeSpecName "kube-api-access-c5bfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.739258 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3705523-7482-4c42-a7e6-9c2081ea7ce5-kube-api-access-zx866" (OuterVolumeSpecName: "kube-api-access-zx866") pod "c3705523-7482-4c42-a7e6-9c2081ea7ce5" (UID: "c3705523-7482-4c42-a7e6-9c2081ea7ce5"). InnerVolumeSpecName "kube-api-access-zx866". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.740136 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a77c6408-e115-4a90-bba3-2d5c64f2c8a3" (UID: "a77c6408-e115-4a90-bba3-2d5c64f2c8a3"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.740279 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-kube-api-access-hgwf7" (OuterVolumeSpecName: "kube-api-access-hgwf7") pod "a77c6408-e115-4a90-bba3-2d5c64f2c8a3" (UID: "a77c6408-e115-4a90-bba3-2d5c64f2c8a3"). InnerVolumeSpecName "kube-api-access-hgwf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.756632 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a77c6408-e115-4a90-bba3-2d5c64f2c8a3" (UID: "a77c6408-e115-4a90-bba3-2d5c64f2c8a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.756646 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f09121d-1a78-4a2a-90ca-90aa15067ebf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f09121d-1a78-4a2a-90ca-90aa15067ebf" (UID: "0f09121d-1a78-4a2a-90ca-90aa15067ebf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.773570 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-config-data" (OuterVolumeSpecName: "config-data") pod "a77c6408-e115-4a90-bba3-2d5c64f2c8a3" (UID: "a77c6408-e115-4a90-bba3-2d5c64f2c8a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.781510 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f09121d-1a78-4a2a-90ca-90aa15067ebf-config-data" (OuterVolumeSpecName: "config-data") pod "0f09121d-1a78-4a2a-90ca-90aa15067ebf" (UID: "0f09121d-1a78-4a2a-90ca-90aa15067ebf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.836166 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx866\" (UniqueName: \"kubernetes.io/projected/c3705523-7482-4c42-a7e6-9c2081ea7ce5-kube-api-access-zx866\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.836576 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f09121d-1a78-4a2a-90ca-90aa15067ebf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.836586 4970 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.836595 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5bfc\" (UniqueName: \"kubernetes.io/projected/0f09121d-1a78-4a2a-90ca-90aa15067ebf-kube-api-access-c5bfc\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.836604 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f09121d-1a78-4a2a-90ca-90aa15067ebf-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.836612 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.836636 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgwf7\" (UniqueName: \"kubernetes.io/projected/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-kube-api-access-hgwf7\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:51 crc kubenswrapper[4970]: I0930 10:02:51.836647 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a77c6408-e115-4a90-bba3-2d5c64f2c8a3-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.126975 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6710-account-create-qf44n" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.127053 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6710-account-create-qf44n" event={"ID":"26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1","Type":"ContainerDied","Data":"f394158e705e0e0e80957c090fc405cb60579d547cfc1f20ec21fa82abc2bb1c"} Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.127131 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f394158e705e0e0e80957c090fc405cb60579d547cfc1f20ec21fa82abc2bb1c" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.133479 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-b35f-account-create-bdr9x" event={"ID":"c3705523-7482-4c42-a7e6-9c2081ea7ce5","Type":"ContainerDied","Data":"4fb38a2c19357abf00324367b7ee9bce856e6db558ea024ddd04159a831c54fb"} Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.133565 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fb38a2c19357abf00324367b7ee9bce856e6db558ea024ddd04159a831c54fb" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.133803 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-b35f-account-create-bdr9x" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.137855 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ksbl9" event={"ID":"0f09121d-1a78-4a2a-90ca-90aa15067ebf","Type":"ContainerDied","Data":"08b67c9eabb2bfabd9d7b12dd1cf352ad6866492d384bcab490728ae09966c47"} Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.137926 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b67c9eabb2bfabd9d7b12dd1cf352ad6866492d384bcab490728ae09966c47" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.137930 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ksbl9" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.142453 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-s8gsl" event={"ID":"a77c6408-e115-4a90-bba3-2d5c64f2c8a3","Type":"ContainerDied","Data":"551ada5bb7dd39cb4a8b2b55237336742c5438775d953c7b6bc8279ef38bd587"} Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.142501 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="551ada5bb7dd39cb4a8b2b55237336742c5438775d953c7b6bc8279ef38bd587" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.142620 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-s8gsl" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.401242 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-kmn2m"] Sep 30 10:02:52 crc kubenswrapper[4970]: E0930 10:02:52.401899 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f09121d-1a78-4a2a-90ca-90aa15067ebf" containerName="keystone-db-sync" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.401928 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f09121d-1a78-4a2a-90ca-90aa15067ebf" containerName="keystone-db-sync" Sep 30 10:02:52 crc kubenswrapper[4970]: E0930 10:02:52.401948 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3705523-7482-4c42-a7e6-9c2081ea7ce5" containerName="mariadb-account-create" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.401957 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3705523-7482-4c42-a7e6-9c2081ea7ce5" containerName="mariadb-account-create" Sep 30 10:02:52 crc kubenswrapper[4970]: E0930 10:02:52.401969 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a77c6408-e115-4a90-bba3-2d5c64f2c8a3" containerName="glance-db-sync" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.401977 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a77c6408-e115-4a90-bba3-2d5c64f2c8a3" containerName="glance-db-sync" Sep 30 10:02:52 crc kubenswrapper[4970]: E0930 10:02:52.402015 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1" containerName="mariadb-account-create" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.402024 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1" containerName="mariadb-account-create" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.402265 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3705523-7482-4c42-a7e6-9c2081ea7ce5" containerName="mariadb-account-create" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.402292 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f09121d-1a78-4a2a-90ca-90aa15067ebf" containerName="keystone-db-sync" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.402315 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1" containerName="mariadb-account-create" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.402329 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a77c6408-e115-4a90-bba3-2d5c64f2c8a3" containerName="glance-db-sync" Sep 30 10:02:52 crc kubenswrapper[4970]: E0930 10:02:52.405420 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3705523_7482_4c42_a7e6_9c2081ea7ce5.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f09121d_1a78_4a2a_90ca_90aa15067ebf.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda77c6408_e115_4a90_bba3_2d5c64f2c8a3.slice/crio-551ada5bb7dd39cb4a8b2b55237336742c5438775d953c7b6bc8279ef38bd587\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda77c6408_e115_4a90_bba3_2d5c64f2c8a3.slice\": RecentStats: unable to find data in 
memory cache]" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.411453 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.425418 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-kmn2m"] Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.460887 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kkvb8"] Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.462736 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.470442 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.470682 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.470948 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.471278 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7kcbp" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.478791 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kkvb8"] Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.556662 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.556718 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-combined-ca-bundle\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.556738 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.556801 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-fernet-keys\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.556821 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-scripts\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 
10:02:52.556842 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzng8\" (UniqueName: \"kubernetes.io/projected/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-kube-api-access-bzng8\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.556868 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-config-data\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.556945 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvp4g\" (UniqueName: \"kubernetes.io/projected/13efae10-c96e-40f3-8f76-7091e463f19d-kube-api-access-qvp4g\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.556981 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.557032 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.557077 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-credential-keys\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.557110 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-config\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.658541 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvp4g\" (UniqueName: \"kubernetes.io/projected/13efae10-c96e-40f3-8f76-7091e463f19d-kube-api-access-qvp4g\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.659012 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " 
pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.659050 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.659100 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-credential-keys\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.659126 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-config\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.659145 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.659161 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.659182 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-combined-ca-bundle\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.659229 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-fernet-keys\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.659249 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-scripts\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.659271 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzng8\" (UniqueName: \"kubernetes.io/projected/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-kube-api-access-bzng8\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.659289 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-config-data\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.662111 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.663163 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.663842 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.668402 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6dddd5dd5-pln65"] Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.672143 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-config\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.672810 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-config-data\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.672965 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.676830 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.678646 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-credential-keys\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.679103 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-scripts\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.682725 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.682849 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-tqlsl" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.682727 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.682810 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.692056 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-fernet-keys\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.692815 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-combined-ca-bundle\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.699464 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6dddd5dd5-pln65"] Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.720948 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvp4g\" (UniqueName: \"kubernetes.io/projected/13efae10-c96e-40f3-8f76-7091e463f19d-kube-api-access-qvp4g\") pod \"keystone-bootstrap-kkvb8\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.736158 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzng8\" (UniqueName: \"kubernetes.io/projected/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-kube-api-access-bzng8\") pod \"dnsmasq-dns-6f8c45789f-kmn2m\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.761207 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8mzl\" (UniqueName: \"kubernetes.io/projected/b3014af0-223a-4f93-9f31-4782c041da84-kube-api-access-p8mzl\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 
10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.761323 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3014af0-223a-4f93-9f31-4782c041da84-config-data\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.761377 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b3014af0-223a-4f93-9f31-4782c041da84-horizon-secret-key\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.761495 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3014af0-223a-4f93-9f31-4782c041da84-scripts\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.761521 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3014af0-223a-4f93-9f31-4782c041da84-logs\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.808040 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.816007 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.847517 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-kmn2m"] Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.857207 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-zdzz8"] Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.858434 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.861645 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.861750 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k9dtb" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.861924 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.862764 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3014af0-223a-4f93-9f31-4782c041da84-logs\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.862830 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8mzl\" (UniqueName: \"kubernetes.io/projected/b3014af0-223a-4f93-9f31-4782c041da84-kube-api-access-p8mzl\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.862870 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3014af0-223a-4f93-9f31-4782c041da84-config-data\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.862902 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b3014af0-223a-4f93-9f31-4782c041da84-horizon-secret-key\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.862968 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3014af0-223a-4f93-9f31-4782c041da84-scripts\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.863888 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3014af0-223a-4f93-9f31-4782c041da84-logs\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.865222 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3014af0-223a-4f93-9f31-4782c041da84-config-data\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.867934 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3014af0-223a-4f93-9f31-4782c041da84-scripts\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc 
kubenswrapper[4970]: I0930 10:02:52.868567 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-zdzz8"] Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.869982 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b3014af0-223a-4f93-9f31-4782c041da84-horizon-secret-key\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.875425 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-plfl5"] Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.877169 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.932436 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8mzl\" (UniqueName: \"kubernetes.io/projected/b3014af0-223a-4f93-9f31-4782c041da84-kube-api-access-p8mzl\") pod \"horizon-6dddd5dd5-pln65\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.936150 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-plfl5"] Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.964321 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.964423 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d62c32c-2c57-455b-9d92-6add27e33831-logs\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.964463 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-config\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.964485 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-config-data\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.964544 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2vjz\" (UniqueName: \"kubernetes.io/projected/3d62c32c-2c57-455b-9d92-6add27e33831-kube-api-access-z2vjz\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.964578 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.964602 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.964683 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.964710 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-combined-ca-bundle\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.964730 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln2dx\" (UniqueName: \"kubernetes.io/projected/d44dda6a-6f30-4e40-86fe-854c1e080f14-kube-api-access-ln2dx\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.964751 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-scripts\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.982158 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:02:52 crc kubenswrapper[4970]: I0930 10:02:52.991283 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:52.997902 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.003358 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.006039 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b5fcb5c4f-mwhb7"] Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.027602 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066328 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066392 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-log-httpd\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066424 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066452 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-run-httpd\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066475 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-scripts\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066500 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-config-data\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066523 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nptcn\" (UniqueName: \"kubernetes.io/projected/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-kube-api-access-nptcn\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066569 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066604 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-combined-ca-bundle\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066622 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ln2dx\" (UniqueName: \"kubernetes.io/projected/d44dda6a-6f30-4e40-86fe-854c1e080f14-kube-api-access-ln2dx\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066646 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-scripts\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066674 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066708 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d62c32c-2c57-455b-9d92-6add27e33831-logs\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066743 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-config\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066763 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-config-data\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066783 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066803 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2vjz\" (UniqueName: \"kubernetes.io/projected/3d62c32c-2c57-455b-9d92-6add27e33831-kube-api-access-z2vjz\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.066833 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.068485 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-ovsdbserver-nb\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.068779 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d62c32c-2c57-455b-9d92-6add27e33831-logs\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.069104 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-ovsdbserver-sb\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.069205 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-dns-svc\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.069656 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-dns-swift-storage-0\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.069935 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-config\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.084825 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-scripts\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.085040 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-config-data\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.092614 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-combined-ca-bundle\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.101364 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2vjz\" (UniqueName: \"kubernetes.io/projected/3d62c32c-2c57-455b-9d92-6add27e33831-kube-api-access-z2vjz\") pod \"placement-db-sync-zdzz8\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " pod="openstack/placement-db-sync-zdzz8" Sep 30 
10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.103020 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln2dx\" (UniqueName: \"kubernetes.io/projected/d44dda6a-6f30-4e40-86fe-854c1e080f14-kube-api-access-ln2dx\") pod \"dnsmasq-dns-6c9c9f998c-plfl5\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.127665 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-plfl5"] Sep 30 10:02:53 crc kubenswrapper[4970]: I0930 10:02:53.129691 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.141042 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.165049 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.167184 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.169517 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-log-httpd\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.169564 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.169599 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-config-data\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.169624 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-run-httpd\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.169648 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-logs\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.169669 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-scripts\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.169690 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-scripts\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.169714 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nptcn\" (UniqueName: \"kubernetes.io/projected/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-kube-api-access-nptcn\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.169733 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-config-data\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.169763 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-horizon-secret-key\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.169851 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.169876 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh6lp\" (UniqueName: \"kubernetes.io/projected/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-kube-api-access-gh6lp\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.170868 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-log-httpd\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.173090 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-nxvpd" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.173390 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.173518 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.178135 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-scripts\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.183157 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-config-data\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.191464 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nptcn\" (UniqueName: \"kubernetes.io/projected/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-kube-api-access-nptcn\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.191541 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.183226 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-run-httpd\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.197316 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.197687 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.221894 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b5fcb5c4f-mwhb7"] Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.235752 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-mj669"] Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.238085 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.271909 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-scripts\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.271999 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-horizon-secret-key\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.272057 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7078d52-8777-418f-870c-4122555e5238-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.272140 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74hz4\" (UniqueName: \"kubernetes.io/projected/a7078d52-8777-418f-870c-4122555e5238-kube-api-access-74hz4\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.272180 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-scripts\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.272200 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.272242 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh6lp\" (UniqueName: \"kubernetes.io/projected/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-kube-api-access-gh6lp\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.272266 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.272282 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7078d52-8777-418f-870c-4122555e5238-logs\") pod \"glance-default-external-api-0\" (UID: 
\"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.272314 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-config-data\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.272333 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-config-data\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.272354 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-logs\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.272423 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-zdzz8" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.273662 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.274199 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-scripts\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.275157 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-config-data\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.276018 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-logs\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.284775 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-horizon-secret-key\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.289141 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-mj669"] Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.292184 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh6lp\" (UniqueName: \"kubernetes.io/projected/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-kube-api-access-gh6lp\") pod \"horizon-5b5fcb5c4f-mwhb7\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " 
pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.342813 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379187 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379263 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-config-data\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379287 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-config\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379313 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379373 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379423 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7078d52-8777-418f-870c-4122555e5238-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379450 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74hz4\" (UniqueName: \"kubernetes.io/projected/a7078d52-8777-418f-870c-4122555e5238-kube-api-access-74hz4\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379474 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379499 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-scripts\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379521 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379550 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggvnf\" (UniqueName: \"kubernetes.io/projected/7ca4f52e-812a-41ff-b9b6-0a193d76560d-kube-api-access-ggvnf\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379586 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.379602 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7078d52-8777-418f-870c-4122555e5238-logs\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.380552 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7078d52-8777-418f-870c-4122555e5238-logs\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.380839 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7078d52-8777-418f-870c-4122555e5238-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.383040 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.389962 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-scripts\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.390523 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.393093 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-config-data\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.404624 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74hz4\" (UniqueName: \"kubernetes.io/projected/a7078d52-8777-418f-870c-4122555e5238-kube-api-access-74hz4\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.406472 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.452070 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.487354 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.487439 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-config\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.487465 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.487537 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.487594 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: 
I0930 10:02:53.487656 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggvnf\" (UniqueName: \"kubernetes.io/projected/7ca4f52e-812a-41ff-b9b6-0a193d76560d-kube-api-access-ggvnf\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.488494 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.488720 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-config\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.490711 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.492731 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.493041 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.522549 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.530777 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggvnf\" (UniqueName: \"kubernetes.io/projected/7ca4f52e-812a-41ff-b9b6-0a193d76560d-kube-api-access-ggvnf\") pod \"dnsmasq-dns-57c957c4ff-mj669\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") " pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.570742 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.847089 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-2n6k8"] Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.862958 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.869066 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-x9x8z" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.869441 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.870582 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2n6k8"] Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.904068 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4pw5n"] Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.905438 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.909629 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.909743 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bfskb" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.917935 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:53.969356 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4pw5n"] Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.004426 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8xnm\" (UniqueName: \"kubernetes.io/projected/6ea9f861-9877-480f-a490-08c80d2580cf-kube-api-access-x8xnm\") pod \"barbican-db-sync-2n6k8\" (UID: \"6ea9f861-9877-480f-a490-08c80d2580cf\") " pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.004509 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bmz\" (UniqueName: \"kubernetes.io/projected/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-kube-api-access-q8bmz\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.004637 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea9f861-9877-480f-a490-08c80d2580cf-combined-ca-bundle\") pod \"barbican-db-sync-2n6k8\" (UID: \"6ea9f861-9877-480f-a490-08c80d2580cf\") " pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.004704 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-combined-ca-bundle\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.004752 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-config-data\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 
10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.004818 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-db-sync-config-data\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.004895 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-scripts\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.004941 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6ea9f861-9877-480f-a490-08c80d2580cf-db-sync-config-data\") pod \"barbican-db-sync-2n6k8\" (UID: \"6ea9f861-9877-480f-a490-08c80d2580cf\") " pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.005129 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-etc-machine-id\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.106638 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8xnm\" (UniqueName: \"kubernetes.io/projected/6ea9f861-9877-480f-a490-08c80d2580cf-kube-api-access-x8xnm\") pod \"barbican-db-sync-2n6k8\" (UID: \"6ea9f861-9877-480f-a490-08c80d2580cf\") " pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.106697 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8bmz\" (UniqueName: \"kubernetes.io/projected/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-kube-api-access-q8bmz\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.106765 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea9f861-9877-480f-a490-08c80d2580cf-combined-ca-bundle\") pod \"barbican-db-sync-2n6k8\" (UID: \"6ea9f861-9877-480f-a490-08c80d2580cf\") " pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.106797 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-combined-ca-bundle\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.106827 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-config-data\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.106852 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-db-sync-config-data\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.106889 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-scripts\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.106904 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6ea9f861-9877-480f-a490-08c80d2580cf-db-sync-config-data\") pod \"barbican-db-sync-2n6k8\" (UID: \"6ea9f861-9877-480f-a490-08c80d2580cf\") " pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.106936 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-etc-machine-id\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.107031 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-etc-machine-id\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.120557 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6ea9f861-9877-480f-a490-08c80d2580cf-db-sync-config-data\") pod \"barbican-db-sync-2n6k8\" (UID: \"6ea9f861-9877-480f-a490-08c80d2580cf\") " pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.122477 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-combined-ca-bundle\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.125511 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-scripts\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.126485 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-config-data\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.126677 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea9f861-9877-480f-a490-08c80d2580cf-combined-ca-bundle\") pod \"barbican-db-sync-2n6k8\" (UID: 
\"6ea9f861-9877-480f-a490-08c80d2580cf\") " pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.134883 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-db-sync-config-data\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.138483 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8xnm\" (UniqueName: \"kubernetes.io/projected/6ea9f861-9877-480f-a490-08c80d2580cf-kube-api-access-x8xnm\") pod \"barbican-db-sync-2n6k8\" (UID: \"6ea9f861-9877-480f-a490-08c80d2580cf\") " pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.140699 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8bmz\" (UniqueName: \"kubernetes.io/projected/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-kube-api-access-q8bmz\") pod \"cinder-db-sync-4pw5n\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.166134 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.167796 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.170344 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.191769 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.195472 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.242973 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.312200 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.312278 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.312310 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-scripts\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.312327 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-logs\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.312343 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmcb2\" (UniqueName: \"kubernetes.io/projected/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-kube-api-access-xmcb2\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.312380 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-config-data\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.312417 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.414207 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-config-data\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.414410 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-combined-ca-bundle\") pod \"glance-default-internal-api-0\" 
(UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.414723 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.414807 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.414896 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-scripts\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.414961 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-logs\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.415062 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmcb2\" (UniqueName: \"kubernetes.io/projected/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-kube-api-access-xmcb2\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.415769 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.416539 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-logs\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.416799 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.419453 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-scripts\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 
10:02:54.422283 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.423681 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-config-data\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.436453 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmcb2\" (UniqueName: \"kubernetes.io/projected/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-kube-api-access-xmcb2\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.453184 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.465517 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kkvb8"] Sep 30 10:02:54 crc kubenswrapper[4970]: W0930 10:02:54.481179 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13efae10_c96e_40f3_8f76_7091e463f19d.slice/crio-94f83567a1981e2f5020f7e41da2b87eafb07f7df5e7aff1eec095ef23ebe441 WatchSource:0}: Error finding container 94f83567a1981e2f5020f7e41da2b87eafb07f7df5e7aff1eec095ef23ebe441: Status 404 returned error can't find the container with id 94f83567a1981e2f5020f7e41da2b87eafb07f7df5e7aff1eec095ef23ebe441 Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.518437 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 10:02:54 crc kubenswrapper[4970]: I0930 10:02:54.992361 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b5fcb5c4f-mwhb7"] Sep 30 10:02:54 crc kubenswrapper[4970]: W0930 10:02:54.998205 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ca4f52e_812a_41ff_b9b6_0a193d76560d.slice/crio-0ead9b7e6a477225c1391505e74a3dee3961c672d4e6c4e183b52878caf48e6d WatchSource:0}: Error finding container 0ead9b7e6a477225c1391505e74a3dee3961c672d4e6c4e183b52878caf48e6d: Status 404 returned error can't find the container with id 0ead9b7e6a477225c1391505e74a3dee3961c672d4e6c4e183b52878caf48e6d Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.011381 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-mj669"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.033329 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-zdzz8"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.064941 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:02:55 crc kubenswrapper[4970]: W0930 10:02:55.075418 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ea9f861_9877_480f_a490_08c80d2580cf.slice/crio-cd7ee2cbae010a65b1fe5f2b352ed4c8bf880428b6539b2cf191555040ac056f WatchSource:0}: Error finding container cd7ee2cbae010a65b1fe5f2b352ed4c8bf880428b6539b2cf191555040ac056f: Status 404 returned error can't find the container with id cd7ee2cbae010a65b1fe5f2b352ed4c8bf880428b6539b2cf191555040ac056f Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.084202 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6dddd5dd5-pln65"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.095914 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-plfl5"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.112054 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4pw5n"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.124981 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-kmn2m"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.134014 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-2n6k8"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.148096 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.230308 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2n6k8" event={"ID":"6ea9f861-9877-480f-a490-08c80d2580cf","Type":"ContainerStarted","Data":"cd7ee2cbae010a65b1fe5f2b352ed4c8bf880428b6539b2cf191555040ac056f"} Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.231640 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-mj669" event={"ID":"7ca4f52e-812a-41ff-b9b6-0a193d76560d","Type":"ContainerStarted","Data":"0ead9b7e6a477225c1391505e74a3dee3961c672d4e6c4e183b52878caf48e6d"} Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.234596 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5fcb5c4f-mwhb7" 
event={"ID":"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454","Type":"ContainerStarted","Data":"27cbc97964b720c4f162d41b4b4336d3dd976ae4825d4b98c8c4bb939d228b74"} Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.250638 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" event={"ID":"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3","Type":"ContainerStarted","Data":"99dd75be154e5c37d43c0672a878c2c8ba3071fb174ade84a1a6c56c2f6a5cfd"} Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.252476 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dddd5dd5-pln65" event={"ID":"b3014af0-223a-4f93-9f31-4782c041da84","Type":"ContainerStarted","Data":"fcbcfe76070a9fb2058bab1a2641dabd40170841319c089f6973805be11e00c8"} Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.253624 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zdzz8" event={"ID":"3d62c32c-2c57-455b-9d92-6add27e33831","Type":"ContainerStarted","Data":"0ba39ae898928cc0350ca162e28e95c246e662133930452493ee14ddae5703f9"} Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.256532 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229","Type":"ContainerStarted","Data":"3cae59d521c01fc9d137e9bb113d0d152f0fe03bcddd89a847462b330831e9bb"} Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.259953 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" event={"ID":"d44dda6a-6f30-4e40-86fe-854c1e080f14","Type":"ContainerStarted","Data":"7857be1f57932fd3d32c186c401c912d5eb9a43d8669d83e20bdad0500225a7f"} Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.261003 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7078d52-8777-418f-870c-4122555e5238","Type":"ContainerStarted","Data":"b3ee7b5092a644ece9644389c5f0080ce88ac1469a28066d51f9d43ddf38df74"} Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.262596 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kkvb8" event={"ID":"13efae10-c96e-40f3-8f76-7091e463f19d","Type":"ContainerStarted","Data":"bbbcbafe272e447b4084d9993c023ef5f4944523ec900f99923215acfd440951"} Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.262629 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kkvb8" event={"ID":"13efae10-c96e-40f3-8f76-7091e463f19d","Type":"ContainerStarted","Data":"94f83567a1981e2f5020f7e41da2b87eafb07f7df5e7aff1eec095ef23ebe441"} Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.264922 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4pw5n" event={"ID":"ad24190f-4eb6-49c8-bad6-c33a817cd9c6","Type":"ContainerStarted","Data":"45a846fdd5d2c8a9ea0456d0cb9a00374260e6505689729e4384f5abc1a9f1ae"} Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.287830 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.308067 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kkvb8" podStartSLOduration=3.307981174 podStartE2EDuration="3.307981174s" podCreationTimestamp="2025-09-30 10:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 
10:02:55.293580121 +0000 UTC m=+988.365431055" watchObservedRunningTime="2025-09-30 10:02:55.307981174 +0000 UTC m=+988.379832098" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.698822 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.725600 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b5fcb5c4f-mwhb7"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.744833 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.776334 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.801058 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-79bc49b8b5-jrq6x"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.803065 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.809101 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79bc49b8b5-jrq6x"] Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.878348 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-logs\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.878461 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-scripts\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.878488 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-horizon-secret-key\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.878526 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8p8k\" (UniqueName: \"kubernetes.io/projected/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-kube-api-access-n8p8k\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.878639 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-config-data\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.980806 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-logs\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: 
\"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.980923 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-scripts\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.980953 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-horizon-secret-key\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.980993 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8p8k\" (UniqueName: \"kubernetes.io/projected/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-kube-api-access-n8p8k\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.981140 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-config-data\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.981271 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-logs\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.983055 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-config-data\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.984003 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-scripts\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:55 crc kubenswrapper[4970]: I0930 10:02:55.997794 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-horizon-secret-key\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.000693 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8p8k\" (UniqueName: \"kubernetes.io/projected/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-kube-api-access-n8p8k\") pod \"horizon-79bc49b8b5-jrq6x\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.132297 4970 util.go:30] "No sandbox for pod 
Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.293022 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345","Type":"ContainerStarted","Data":"175d8d374692f22f9686d527a1b0369f1b6ddb7cec731d1907c7ba9c54d973b0"}
Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.298903 4970 generic.go:334] "Generic (PLEG): container finished" podID="d44dda6a-6f30-4e40-86fe-854c1e080f14" containerID="b14220162f3bf9d50ec31d627e3ca1c40d33103b34015bc4d2e193f913ea5fa4" exitCode=0
Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.299000 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" event={"ID":"d44dda6a-6f30-4e40-86fe-854c1e080f14","Type":"ContainerDied","Data":"b14220162f3bf9d50ec31d627e3ca1c40d33103b34015bc4d2e193f913ea5fa4"}
Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.324771 4970 generic.go:334] "Generic (PLEG): container finished" podID="7ca4f52e-812a-41ff-b9b6-0a193d76560d" containerID="399fc3993063bd6ca7f3ddd4a1b99c6dfdd26ffb727a23176b8e6aae6d4fb95b" exitCode=0
Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.324905 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-mj669" event={"ID":"7ca4f52e-812a-41ff-b9b6-0a193d76560d","Type":"ContainerDied","Data":"399fc3993063bd6ca7f3ddd4a1b99c6dfdd26ffb727a23176b8e6aae6d4fb95b"}
Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.368754 4970 generic.go:334] "Generic (PLEG): container finished" podID="003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3" containerID="813184bbdf52955c6819d4ba3f92d9d2dafa0c666e075e07b59b4947808d4033" exitCode=0
Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.368983 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" event={"ID":"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3","Type":"ContainerDied","Data":"813184bbdf52955c6819d4ba3f92d9d2dafa0c666e075e07b59b4947808d4033"}
Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.815221 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5"
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.932885 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-ovsdbserver-nb\") pod \"d44dda6a-6f30-4e40-86fe-854c1e080f14\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.933525 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-ovsdbserver-sb\") pod \"d44dda6a-6f30-4e40-86fe-854c1e080f14\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.933599 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln2dx\" (UniqueName: \"kubernetes.io/projected/d44dda6a-6f30-4e40-86fe-854c1e080f14-kube-api-access-ln2dx\") pod \"d44dda6a-6f30-4e40-86fe-854c1e080f14\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.933699 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-dns-svc\") pod \"d44dda6a-6f30-4e40-86fe-854c1e080f14\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.933778 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-dns-swift-storage-0\") pod \"d44dda6a-6f30-4e40-86fe-854c1e080f14\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.933943 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-config\") pod \"d44dda6a-6f30-4e40-86fe-854c1e080f14\" (UID: \"d44dda6a-6f30-4e40-86fe-854c1e080f14\") " Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.949836 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-79bc49b8b5-jrq6x"] Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.958580 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:56 crc kubenswrapper[4970]: I0930 10:02:56.984225 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d44dda6a-6f30-4e40-86fe-854c1e080f14-kube-api-access-ln2dx" (OuterVolumeSpecName: "kube-api-access-ln2dx") pod "d44dda6a-6f30-4e40-86fe-854c1e080f14" (UID: "d44dda6a-6f30-4e40-86fe-854c1e080f14"). InnerVolumeSpecName "kube-api-access-ln2dx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:57 crc kubenswrapper[4970]: W0930 10:02:57.008136 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b6077a3_21ee_4b10_8f3b_dfefdad1c51c.slice/crio-4768c12b6baa4e168391025a7ab58d049b287fc11c3348a50987dece9f8d5641 WatchSource:0}: Error finding container 4768c12b6baa4e168391025a7ab58d049b287fc11c3348a50987dece9f8d5641: Status 404 returned error can't find the container with id 4768c12b6baa4e168391025a7ab58d049b287fc11c3348a50987dece9f8d5641 Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.035492 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzng8\" (UniqueName: \"kubernetes.io/projected/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-kube-api-access-bzng8\") pod \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.035598 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-dns-swift-storage-0\") pod \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.035686 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-dns-svc\") pod \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.035780 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-ovsdbserver-nb\") pod \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.035983 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-ovsdbserver-sb\") pod \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.036064 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-config\") pod \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\" (UID: \"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3\") " Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.036473 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln2dx\" (UniqueName: \"kubernetes.io/projected/d44dda6a-6f30-4e40-86fe-854c1e080f14-kube-api-access-ln2dx\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.037838 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-config" (OuterVolumeSpecName: "config") pod "d44dda6a-6f30-4e40-86fe-854c1e080f14" (UID: "d44dda6a-6f30-4e40-86fe-854c1e080f14"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.060164 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-kube-api-access-bzng8" (OuterVolumeSpecName: "kube-api-access-bzng8") pod "003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3" (UID: "003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3"). InnerVolumeSpecName "kube-api-access-bzng8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.124234 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3" (UID: "003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.126338 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3" (UID: "003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.132076 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d44dda6a-6f30-4e40-86fe-854c1e080f14" (UID: "d44dda6a-6f30-4e40-86fe-854c1e080f14"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.132362 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3" (UID: "003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.133032 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d44dda6a-6f30-4e40-86fe-854c1e080f14" (UID: "d44dda6a-6f30-4e40-86fe-854c1e080f14"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.138894 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzng8\" (UniqueName: \"kubernetes.io/projected/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-kube-api-access-bzng8\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.138933 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.138945 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.138956 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.138968 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.138979 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.138990 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.147419 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3" (UID: "003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.147764 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d44dda6a-6f30-4e40-86fe-854c1e080f14" (UID: "d44dda6a-6f30-4e40-86fe-854c1e080f14"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.157518 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d44dda6a-6f30-4e40-86fe-854c1e080f14" (UID: "d44dda6a-6f30-4e40-86fe-854c1e080f14"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.164850 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-config" (OuterVolumeSpecName: "config") pod "003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3" (UID: "003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.240915 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.240963 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.240981 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.240997 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d44dda6a-6f30-4e40-86fe-854c1e080f14-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.398696 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345","Type":"ContainerStarted","Data":"72a546b580bc4dc4875cc65e439184d75c7ee8a23194e736b52e15fa66fb1e86"} Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.404378 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79bc49b8b5-jrq6x" event={"ID":"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c","Type":"ContainerStarted","Data":"4768c12b6baa4e168391025a7ab58d049b287fc11c3348a50987dece9f8d5641"} Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.407891 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" event={"ID":"d44dda6a-6f30-4e40-86fe-854c1e080f14","Type":"ContainerDied","Data":"7857be1f57932fd3d32c186c401c912d5eb9a43d8669d83e20bdad0500225a7f"} Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.408023 4970 scope.go:117] "RemoveContainer" containerID="b14220162f3bf9d50ec31d627e3ca1c40d33103b34015bc4d2e193f913ea5fa4" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.408020 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c9c9f998c-plfl5" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.414569 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-mj669" event={"ID":"7ca4f52e-812a-41ff-b9b6-0a193d76560d","Type":"ContainerStarted","Data":"8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4"} Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.414811 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.416945 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7078d52-8777-418f-870c-4122555e5238","Type":"ContainerStarted","Data":"0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1"} Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.422303 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" event={"ID":"003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3","Type":"ContainerDied","Data":"99dd75be154e5c37d43c0672a878c2c8ba3071fb174ade84a1a6c56c2f6a5cfd"} Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.422398 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-kmn2m" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.455350 4970 scope.go:117] "RemoveContainer" containerID="813184bbdf52955c6819d4ba3f92d9d2dafa0c666e075e07b59b4947808d4033" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.461993 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-mj669" podStartSLOduration=5.461959571 podStartE2EDuration="5.461959571s" podCreationTimestamp="2025-09-30 10:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:02:57.456347542 +0000 UTC m=+990.528198476" watchObservedRunningTime="2025-09-30 10:02:57.461959571 +0000 UTC m=+990.533810505" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.510787 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-plfl5"] Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.523109 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c9c9f998c-plfl5"] Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.628481 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-kmn2m"] Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.638584 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-kmn2m"] Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.716504 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3" path="/var/lib/kubelet/pods/003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3/volumes" Sep 30 10:02:57 crc kubenswrapper[4970]: I0930 10:02:57.717153 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d44dda6a-6f30-4e40-86fe-854c1e080f14" path="/var/lib/kubelet/pods/d44dda6a-6f30-4e40-86fe-854c1e080f14/volumes" Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.441146 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"a7078d52-8777-418f-870c-4122555e5238","Type":"ContainerStarted","Data":"df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea"} Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.441317 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a7078d52-8777-418f-870c-4122555e5238" containerName="glance-log" containerID="cri-o://0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1" gracePeriod=30 Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.441401 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a7078d52-8777-418f-870c-4122555e5238" containerName="glance-httpd" containerID="cri-o://df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea" gracePeriod=30 Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.478844 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.47881932 podStartE2EDuration="6.47881932s" podCreationTimestamp="2025-09-30 10:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:02:58.467406876 +0000 UTC m=+991.539257820" watchObservedRunningTime="2025-09-30 10:02:58.47881932 +0000 UTC m=+991.550670254" Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.751746 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d622-account-create-7hnv9"] Sep 30 10:02:58 crc kubenswrapper[4970]: E0930 10:02:58.752235 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3" containerName="init" Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.752255 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3" containerName="init" Sep 30 10:02:58 crc kubenswrapper[4970]: E0930 10:02:58.752271 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d44dda6a-6f30-4e40-86fe-854c1e080f14" containerName="init" Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.752278 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d44dda6a-6f30-4e40-86fe-854c1e080f14" containerName="init" Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.752459 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d44dda6a-6f30-4e40-86fe-854c1e080f14" containerName="init" Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.752476 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="003d84ae-3cfe-4d50-ba2a-cf4e11e0f1d3" containerName="init" Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.753133 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d622-account-create-7hnv9" Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.759034 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.768617 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d622-account-create-7hnv9"] Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.788896 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpgwb\" (UniqueName: \"kubernetes.io/projected/2cbb3863-af8c-4cba-9557-a02d280dd1c7-kube-api-access-bpgwb\") pod \"neutron-d622-account-create-7hnv9\" (UID: \"2cbb3863-af8c-4cba-9557-a02d280dd1c7\") " pod="openstack/neutron-d622-account-create-7hnv9" Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.891813 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpgwb\" (UniqueName: \"kubernetes.io/projected/2cbb3863-af8c-4cba-9557-a02d280dd1c7-kube-api-access-bpgwb\") pod \"neutron-d622-account-create-7hnv9\" (UID: \"2cbb3863-af8c-4cba-9557-a02d280dd1c7\") " pod="openstack/neutron-d622-account-create-7hnv9" Sep 30 10:02:58 crc kubenswrapper[4970]: I0930 10:02:58.932841 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpgwb\" (UniqueName: \"kubernetes.io/projected/2cbb3863-af8c-4cba-9557-a02d280dd1c7-kube-api-access-bpgwb\") pod \"neutron-d622-account-create-7hnv9\" (UID: \"2cbb3863-af8c-4cba-9557-a02d280dd1c7\") " pod="openstack/neutron-d622-account-create-7hnv9" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.083301 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d622-account-create-7hnv9" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.300659 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.404267 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-scripts\") pod \"a7078d52-8777-418f-870c-4122555e5238\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.404388 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-config-data\") pod \"a7078d52-8777-418f-870c-4122555e5238\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.404411 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"a7078d52-8777-418f-870c-4122555e5238\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.404496 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7078d52-8777-418f-870c-4122555e5238-logs\") pod \"a7078d52-8777-418f-870c-4122555e5238\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.404531 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7078d52-8777-418f-870c-4122555e5238-httpd-run\") pod \"a7078d52-8777-418f-870c-4122555e5238\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.404572 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-combined-ca-bundle\") pod \"a7078d52-8777-418f-870c-4122555e5238\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.404637 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74hz4\" (UniqueName: \"kubernetes.io/projected/a7078d52-8777-418f-870c-4122555e5238-kube-api-access-74hz4\") pod \"a7078d52-8777-418f-870c-4122555e5238\" (UID: \"a7078d52-8777-418f-870c-4122555e5238\") " Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.406340 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7078d52-8777-418f-870c-4122555e5238-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a7078d52-8777-418f-870c-4122555e5238" (UID: "a7078d52-8777-418f-870c-4122555e5238"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.406761 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7078d52-8777-418f-870c-4122555e5238-logs" (OuterVolumeSpecName: "logs") pod "a7078d52-8777-418f-870c-4122555e5238" (UID: "a7078d52-8777-418f-870c-4122555e5238"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.413543 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-scripts" (OuterVolumeSpecName: "scripts") pod "a7078d52-8777-418f-870c-4122555e5238" (UID: "a7078d52-8777-418f-870c-4122555e5238"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.413934 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7078d52-8777-418f-870c-4122555e5238-kube-api-access-74hz4" (OuterVolumeSpecName: "kube-api-access-74hz4") pod "a7078d52-8777-418f-870c-4122555e5238" (UID: "a7078d52-8777-418f-870c-4122555e5238"). InnerVolumeSpecName "kube-api-access-74hz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.419834 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "a7078d52-8777-418f-870c-4122555e5238" (UID: "a7078d52-8777-418f-870c-4122555e5238"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.459192 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7078d52-8777-418f-870c-4122555e5238" (UID: "a7078d52-8777-418f-870c-4122555e5238"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.498962 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345","Type":"ContainerStarted","Data":"9afbd82f69258f2c8c059e75f0bcbbf0e9aaa737bb6873ed0a1336cff9d0e679"} Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.499195 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" containerName="glance-log" containerID="cri-o://72a546b580bc4dc4875cc65e439184d75c7ee8a23194e736b52e15fa66fb1e86" gracePeriod=30 Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.499468 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" containerName="glance-httpd" containerID="cri-o://9afbd82f69258f2c8c059e75f0bcbbf0e9aaa737bb6873ed0a1336cff9d0e679" gracePeriod=30 Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.500567 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-config-data" (OuterVolumeSpecName: "config-data") pod "a7078d52-8777-418f-870c-4122555e5238" (UID: "a7078d52-8777-418f-870c-4122555e5238"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.507936 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.507972 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.508030 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.508045 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7078d52-8777-418f-870c-4122555e5238-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.508057 4970 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a7078d52-8777-418f-870c-4122555e5238-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.508069 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7078d52-8777-418f-870c-4122555e5238-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.508154 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74hz4\" (UniqueName: \"kubernetes.io/projected/a7078d52-8777-418f-870c-4122555e5238-kube-api-access-74hz4\") on node \"crc\" DevicePath \"\"" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.514045 4970 generic.go:334] "Generic (PLEG): container finished" podID="a7078d52-8777-418f-870c-4122555e5238" containerID="df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea" exitCode=0 Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.514466 4970 generic.go:334] "Generic (PLEG): container finished" podID="a7078d52-8777-418f-870c-4122555e5238" containerID="0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1" exitCode=143 Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.514156 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7078d52-8777-418f-870c-4122555e5238","Type":"ContainerDied","Data":"df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea"} Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.514565 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7078d52-8777-418f-870c-4122555e5238","Type":"ContainerDied","Data":"0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1"} Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.514295 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.514640 4970 scope.go:117] "RemoveContainer" containerID="df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.514622 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a7078d52-8777-418f-870c-4122555e5238","Type":"ContainerDied","Data":"b3ee7b5092a644ece9644389c5f0080ce88ac1469a28066d51f9d43ddf38df74"} Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.536106 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.536075724 podStartE2EDuration="6.536075724s" podCreationTimestamp="2025-09-30 10:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:02:59.517286104 +0000 UTC m=+992.589137058" watchObservedRunningTime="2025-09-30 10:02:59.536075724 +0000 UTC m=+992.607926678" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.603849 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.609467 4970 scope.go:117] "RemoveContainer" containerID="0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.611116 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.650046 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.665159 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:02:59 crc kubenswrapper[4970]: E0930 10:02:59.665784 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7078d52-8777-418f-870c-4122555e5238" containerName="glance-httpd" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.665804 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7078d52-8777-418f-870c-4122555e5238" containerName="glance-httpd" Sep 30 10:02:59 crc kubenswrapper[4970]: E0930 10:02:59.665836 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7078d52-8777-418f-870c-4122555e5238" containerName="glance-log" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.665845 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7078d52-8777-418f-870c-4122555e5238" containerName="glance-log" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.666089 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7078d52-8777-418f-870c-4122555e5238" containerName="glance-log" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.666124 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7078d52-8777-418f-870c-4122555e5238" containerName="glance-httpd" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.667349 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.668940 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.685514 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.713565 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.713864 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wttmj\" (UniqueName: \"kubernetes.io/projected/d890158e-f620-412b-ad4f-11437ded0689-kube-api-access-wttmj\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.713931 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d890158e-f620-412b-ad4f-11437ded0689-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.714256 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d890158e-f620-412b-ad4f-11437ded0689-logs\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.714413 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-scripts\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.714481 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-config-data\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.714543 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.714838 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") device mount path 
\"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.745529 4970 scope.go:117] "RemoveContainer" containerID="df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.746396 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7078d52-8777-418f-870c-4122555e5238" path="/var/lib/kubelet/pods/a7078d52-8777-418f-870c-4122555e5238/volumes" Sep 30 10:02:59 crc kubenswrapper[4970]: E0930 10:02:59.748860 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea\": container with ID starting with df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea not found: ID does not exist" containerID="df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.748902 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea"} err="failed to get container status \"df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea\": rpc error: code = NotFound desc = could not find container \"df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea\": container with ID starting with df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea not found: ID does not exist" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.748930 4970 scope.go:117] "RemoveContainer" containerID="0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.749602 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d622-account-create-7hnv9"] Sep 30 10:02:59 crc kubenswrapper[4970]: E0930 10:02:59.753134 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1\": container with ID starting with 0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1 not found: ID does not exist" containerID="0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.753166 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1"} err="failed to get container status \"0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1\": rpc error: code = NotFound desc = could not find container \"0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1\": container with ID starting with 0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1 not found: ID does not exist" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.753187 4970 scope.go:117] "RemoveContainer" containerID="df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.753470 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea"} err="failed to get container status \"df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea\": rpc error: code = NotFound desc = could not find container 
\"df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea\": container with ID starting with df6e8c1028a6ce9bbb44991b2b04e031974b7d9d452267ab4d47685f13157eea not found: ID does not exist" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.753532 4970 scope.go:117] "RemoveContainer" containerID="0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.755251 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1"} err="failed to get container status \"0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1\": rpc error: code = NotFound desc = could not find container \"0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1\": container with ID starting with 0b54afd05c8c2c9062cfb8b9050f035980ad2890e4933b0423500cf15ed689d1 not found: ID does not exist" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.799656 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.816309 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wttmj\" (UniqueName: \"kubernetes.io/projected/d890158e-f620-412b-ad4f-11437ded0689-kube-api-access-wttmj\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.816385 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d890158e-f620-412b-ad4f-11437ded0689-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.816417 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d890158e-f620-412b-ad4f-11437ded0689-logs\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.816467 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-scripts\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.816514 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-config-data\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.816569 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: 
\"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.817127 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d890158e-f620-412b-ad4f-11437ded0689-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.817549 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d890158e-f620-412b-ad4f-11437ded0689-logs\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.823105 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-scripts\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.824947 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-config-data\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.828413 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:02:59 crc kubenswrapper[4970]: I0930 10:02:59.833985 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wttmj\" (UniqueName: \"kubernetes.io/projected/d890158e-f620-412b-ad4f-11437ded0689-kube-api-access-wttmj\") pod \"glance-default-external-api-0\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:00 crc kubenswrapper[4970]: I0930 10:03:00.075811 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 10:03:00 crc kubenswrapper[4970]: I0930 10:03:00.555188 4970 generic.go:334] "Generic (PLEG): container finished" podID="32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" containerID="9afbd82f69258f2c8c059e75f0bcbbf0e9aaa737bb6873ed0a1336cff9d0e679" exitCode=0 Sep 30 10:03:00 crc kubenswrapper[4970]: I0930 10:03:00.555228 4970 generic.go:334] "Generic (PLEG): container finished" podID="32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" containerID="72a546b580bc4dc4875cc65e439184d75c7ee8a23194e736b52e15fa66fb1e86" exitCode=143 Sep 30 10:03:00 crc kubenswrapper[4970]: I0930 10:03:00.555283 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345","Type":"ContainerDied","Data":"9afbd82f69258f2c8c059e75f0bcbbf0e9aaa737bb6873ed0a1336cff9d0e679"} Sep 30 10:03:00 crc kubenswrapper[4970]: I0930 10:03:00.555314 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345","Type":"ContainerDied","Data":"72a546b580bc4dc4875cc65e439184d75c7ee8a23194e736b52e15fa66fb1e86"} Sep 30 10:03:00 crc kubenswrapper[4970]: I0930 10:03:00.588153 4970 generic.go:334] "Generic (PLEG): container finished" podID="13efae10-c96e-40f3-8f76-7091e463f19d" containerID="bbbcbafe272e447b4084d9993c023ef5f4944523ec900f99923215acfd440951" exitCode=0 Sep 30 10:03:00 crc kubenswrapper[4970]: I0930 10:03:00.588291 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kkvb8" event={"ID":"13efae10-c96e-40f3-8f76-7091e463f19d","Type":"ContainerDied","Data":"bbbcbafe272e447b4084d9993c023ef5f4944523ec900f99923215acfd440951"} Sep 30 10:03:00 crc kubenswrapper[4970]: I0930 10:03:00.607034 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d622-account-create-7hnv9" event={"ID":"2cbb3863-af8c-4cba-9557-a02d280dd1c7","Type":"ContainerStarted","Data":"ceec66bdca4f542755ce01eb6dfa46d67568c2df50429756a62e0b39a710e5cd"} Sep 30 10:03:00 crc kubenswrapper[4970]: I0930 10:03:00.777823 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:03:00 crc kubenswrapper[4970]: W0930 10:03:00.831159 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd890158e_f620_412b_ad4f_11437ded0689.slice/crio-660b0e488580dc7142b245aa3302eb7db047cb78a041e9e408641c278b8e033d WatchSource:0}: Error finding container 660b0e488580dc7142b245aa3302eb7db047cb78a041e9e408641c278b8e033d: Status 404 returned error can't find the container with id 660b0e488580dc7142b245aa3302eb7db047cb78a041e9e408641c278b8e033d Sep 30 10:03:01 crc kubenswrapper[4970]: I0930 10:03:01.632155 4970 generic.go:334] "Generic (PLEG): container finished" podID="2cbb3863-af8c-4cba-9557-a02d280dd1c7" containerID="e0d7235bb5495f77135975159a7f12f227ac7f50760380a0f9ed5bc15016aa17" exitCode=0 Sep 30 10:03:01 crc kubenswrapper[4970]: I0930 10:03:01.632417 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d622-account-create-7hnv9" event={"ID":"2cbb3863-af8c-4cba-9557-a02d280dd1c7","Type":"ContainerDied","Data":"e0d7235bb5495f77135975159a7f12f227ac7f50760380a0f9ed5bc15016aa17"} Sep 30 10:03:01 crc kubenswrapper[4970]: I0930 10:03:01.638477 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"d890158e-f620-412b-ad4f-11437ded0689","Type":"ContainerStarted","Data":"660b0e488580dc7142b245aa3302eb7db047cb78a041e9e408641c278b8e033d"} Sep 30 10:03:02 crc kubenswrapper[4970]: I0930 10:03:02.650351 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d890158e-f620-412b-ad4f-11437ded0689","Type":"ContainerStarted","Data":"29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60"} Sep 30 10:03:03 crc kubenswrapper[4970]: I0930 10:03:03.572197 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-mj669" Sep 30 10:03:03 crc kubenswrapper[4970]: I0930 10:03:03.644961 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-x6dpl"] Sep 30 10:03:03 crc kubenswrapper[4970]: I0930 10:03:03.645332 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" podUID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" containerName="dnsmasq-dns" containerID="cri-o://1e25e16d357f4992d1d3716ee20aeb8544e0462676b347425ccf853363c5e9d7" gracePeriod=10 Sep 30 10:03:04 crc kubenswrapper[4970]: I0930 10:03:04.691059 4970 generic.go:334] "Generic (PLEG): container finished" podID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" containerID="1e25e16d357f4992d1d3716ee20aeb8544e0462676b347425ccf853363c5e9d7" exitCode=0 Sep 30 10:03:04 crc kubenswrapper[4970]: I0930 10:03:04.691248 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" event={"ID":"bf98d493-f44f-4cd9-ab10-4a5a132ce94f","Type":"ContainerDied","Data":"1e25e16d357f4992d1d3716ee20aeb8544e0462676b347425ccf853363c5e9d7"} Sep 30 10:03:04 crc kubenswrapper[4970]: I0930 10:03:04.821357 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:03:04 crc kubenswrapper[4970]: I0930 10:03:04.821483 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.472222 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.820309 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6dddd5dd5-pln65"] Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.856431 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f6674bb4b-gp2wp"] Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.859619 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.865375 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.870889 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6674bb4b-gp2wp"] Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.906450 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-horizon-secret-key\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.906526 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48bd5d72-868c-4d81-9d1f-6a03ba997169-config-data\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.906609 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48bd5d72-868c-4d81-9d1f-6a03ba997169-scripts\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.906666 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48bd5d72-868c-4d81-9d1f-6a03ba997169-logs\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.906717 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-horizon-tls-certs\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.906798 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqn89\" (UniqueName: \"kubernetes.io/projected/48bd5d72-868c-4d81-9d1f-6a03ba997169-kube-api-access-fqn89\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.906866 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-combined-ca-bundle\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.940375 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79bc49b8b5-jrq6x"] Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.974180 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-8fbb4f9c8-n8t5n"] Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.975823 
4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:06 crc kubenswrapper[4970]: I0930 10:03:06.993623 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8fbb4f9c8-n8t5n"] Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.008236 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48bd5d72-868c-4d81-9d1f-6a03ba997169-scripts\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.008302 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48bd5d72-868c-4d81-9d1f-6a03ba997169-logs\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.008336 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-horizon-tls-certs\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.008386 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqn89\" (UniqueName: \"kubernetes.io/projected/48bd5d72-868c-4d81-9d1f-6a03ba997169-kube-api-access-fqn89\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.008424 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-combined-ca-bundle\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.008470 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-horizon-secret-key\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.008492 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48bd5d72-868c-4d81-9d1f-6a03ba997169-config-data\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.008822 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48bd5d72-868c-4d81-9d1f-6a03ba997169-logs\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.009961 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48bd5d72-868c-4d81-9d1f-6a03ba997169-scripts\") pod \"horizon-6f6674bb4b-gp2wp\" 
(UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.014044 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48bd5d72-868c-4d81-9d1f-6a03ba997169-config-data\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.017938 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-horizon-tls-certs\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.033518 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-horizon-secret-key\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.037944 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqn89\" (UniqueName: \"kubernetes.io/projected/48bd5d72-868c-4d81-9d1f-6a03ba997169-kube-api-access-fqn89\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.047891 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-combined-ca-bundle\") pod \"horizon-6f6674bb4b-gp2wp\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.110205 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b479413-73f2-4159-8ec6-5e23f139c53c-combined-ca-bundle\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.110266 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b479413-73f2-4159-8ec6-5e23f139c53c-logs\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.110306 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b479413-73f2-4159-8ec6-5e23f139c53c-horizon-secret-key\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.110338 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b479413-73f2-4159-8ec6-5e23f139c53c-config-data\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" 
Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.110370 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b479413-73f2-4159-8ec6-5e23f139c53c-horizon-tls-certs\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.110479 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b479413-73f2-4159-8ec6-5e23f139c53c-scripts\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.110584 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvlq4\" (UniqueName: \"kubernetes.io/projected/8b479413-73f2-4159-8ec6-5e23f139c53c-kube-api-access-tvlq4\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.213236 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b479413-73f2-4159-8ec6-5e23f139c53c-config-data\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.213295 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b479413-73f2-4159-8ec6-5e23f139c53c-horizon-tls-certs\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.213342 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b479413-73f2-4159-8ec6-5e23f139c53c-scripts\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.213374 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvlq4\" (UniqueName: \"kubernetes.io/projected/8b479413-73f2-4159-8ec6-5e23f139c53c-kube-api-access-tvlq4\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.213482 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b479413-73f2-4159-8ec6-5e23f139c53c-combined-ca-bundle\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.213507 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b479413-73f2-4159-8ec6-5e23f139c53c-logs\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.213538 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b479413-73f2-4159-8ec6-5e23f139c53c-horizon-secret-key\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.214324 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b479413-73f2-4159-8ec6-5e23f139c53c-scripts\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.214601 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b479413-73f2-4159-8ec6-5e23f139c53c-logs\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.214758 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8b479413-73f2-4159-8ec6-5e23f139c53c-config-data\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.217140 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b479413-73f2-4159-8ec6-5e23f139c53c-horizon-tls-certs\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.218402 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b479413-73f2-4159-8ec6-5e23f139c53c-combined-ca-bundle\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.218759 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8b479413-73f2-4159-8ec6-5e23f139c53c-horizon-secret-key\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.233215 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.240948 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvlq4\" (UniqueName: \"kubernetes.io/projected/8b479413-73f2-4159-8ec6-5e23f139c53c-kube-api-access-tvlq4\") pod \"horizon-8fbb4f9c8-n8t5n\" (UID: \"8b479413-73f2-4159-8ec6-5e23f139c53c\") " pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.296117 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:07 crc kubenswrapper[4970]: I0930 10:03:07.486835 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" podUID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Sep 30 10:03:12 crc kubenswrapper[4970]: I0930 10:03:12.487388 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" podUID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Sep 30 10:03:17 crc kubenswrapper[4970]: E0930 10:03:17.391040 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Sep 30 10:03:17 crc kubenswrapper[4970]: E0930 10:03:17.392198 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x8xnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-2n6k8_openstack(6ea9f861-9877-480f-a490-08c80d2580cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:03:17 crc kubenswrapper[4970]: E0930 10:03:17.393453 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-2n6k8" podUID="6ea9f861-9877-480f-a490-08c80d2580cf" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.486849 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" podUID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" containerName="dnsmasq-dns" 
probeResult="failure" output="dial tcp 10.217.0.126:5353: connect: connection refused" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.487056 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.496106 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.561178 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-combined-ca-bundle\") pod \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.561250 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmcb2\" (UniqueName: \"kubernetes.io/projected/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-kube-api-access-xmcb2\") pod \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.561488 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-httpd-run\") pod \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.561558 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-config-data\") pod \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.561710 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.561813 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-logs\") pod \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.561857 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-scripts\") pod \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\" (UID: \"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345\") " Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.562602 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" (UID: "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.562746 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-logs" (OuterVolumeSpecName: "logs") pod "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" (UID: "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.571653 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-kube-api-access-xmcb2" (OuterVolumeSpecName: "kube-api-access-xmcb2") pod "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" (UID: "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345"). InnerVolumeSpecName "kube-api-access-xmcb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.574184 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" (UID: "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.576262 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-scripts" (OuterVolumeSpecName: "scripts") pod "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" (UID: "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.596556 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" (UID: "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.622796 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-config-data" (OuterVolumeSpecName: "config-data") pod "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" (UID: "32f981de-1d7a-4d2f-8e3d-6f9d07dd7345"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.664729 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.664813 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.664827 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.664839 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.664857 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmcb2\" (UniqueName: \"kubernetes.io/projected/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-kube-api-access-xmcb2\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.664868 4970 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.664881 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.689403 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.766818 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.828680 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"32f981de-1d7a-4d2f-8e3d-6f9d07dd7345","Type":"ContainerDied","Data":"175d8d374692f22f9686d527a1b0369f1b6ddb7cec731d1907c7ba9c54d973b0"} Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.828759 4970 scope.go:117] "RemoveContainer" containerID="9afbd82f69258f2c8c059e75f0bcbbf0e9aaa737bb6873ed0a1336cff9d0e679" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.828802 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:17 crc kubenswrapper[4970]: E0930 10:03:17.846710 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-2n6k8" podUID="6ea9f861-9877-480f-a490-08c80d2580cf" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.893977 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.902952 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.913303 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:03:17 crc kubenswrapper[4970]: E0930 10:03:17.913878 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" containerName="glance-log" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.913905 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" containerName="glance-log" Sep 30 10:03:17 crc kubenswrapper[4970]: E0930 10:03:17.913914 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" containerName="glance-httpd" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.913921 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" containerName="glance-httpd" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.914152 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" containerName="glance-log" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.914190 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" containerName="glance-httpd" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.915299 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.920782 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.921629 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 30 10:03:17 crc kubenswrapper[4970]: I0930 10:03:17.926367 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.079648 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4048eb6d-7c40-4a50-9fba-d253cb710ee6-logs\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.079786 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rnkj\" (UniqueName: \"kubernetes.io/projected/4048eb6d-7c40-4a50-9fba-d253cb710ee6-kube-api-access-7rnkj\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.079855 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.079921 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.079961 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4048eb6d-7c40-4a50-9fba-d253cb710ee6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.080031 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.080125 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.080188 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.182443 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4048eb6d-7c40-4a50-9fba-d253cb710ee6-logs\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.182499 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rnkj\" (UniqueName: \"kubernetes.io/projected/4048eb6d-7c40-4a50-9fba-d253cb710ee6-kube-api-access-7rnkj\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.182544 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.182571 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.182591 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4048eb6d-7c40-4a50-9fba-d253cb710ee6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.182610 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.182679 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.182716 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.183165 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.183240 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4048eb6d-7c40-4a50-9fba-d253cb710ee6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.183191 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4048eb6d-7c40-4a50-9fba-d253cb710ee6-logs\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.188541 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.189013 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.189738 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.189982 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.200871 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rnkj\" (UniqueName: \"kubernetes.io/projected/4048eb6d-7c40-4a50-9fba-d253cb710ee6-kube-api-access-7rnkj\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.210568 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:03:18 crc kubenswrapper[4970]: I0930 10:03:18.240192 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:19 crc kubenswrapper[4970]: I0930 10:03:19.682420 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32f981de-1d7a-4d2f-8e3d-6f9d07dd7345" path="/var/lib/kubelet/pods/32f981de-1d7a-4d2f-8e3d-6f9d07dd7345/volumes" Sep 30 10:03:22 crc kubenswrapper[4970]: E0930 10:03:22.398593 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Sep 30 10:03:22 crc kubenswrapper[4970]: E0930 10:03:22.399377 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nch555h5d6h94h64h566h68hdch5b8h56bh67bh5f6h669h658h67bh5d9h85h578h597h77h646h84h576h67dh689h5d5h85h5f8h575h559h57fh549q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gh6lp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5b5fcb5c4f-mwhb7_openstack(8e8b2c9b-7b7b-4650-9e5a-ae69390ba454): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:03:22 crc kubenswrapper[4970]: E0930 10:03:22.402085 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5b5fcb5c4f-mwhb7" podUID="8e8b2c9b-7b7b-4650-9e5a-ae69390ba454" Sep 30 10:03:22 crc kubenswrapper[4970]: E0930 10:03:22.406723 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Sep 30 10:03:22 crc kubenswrapper[4970]: E0930 10:03:22.407012 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n4hb9h98h87hbh5c6h7fh66ch95h647h5d7h665h594h598h5bch668h55bh5h564h77h5cdhf9h5bhd7h574hb4h77h677h88h99h646h74q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n8p8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-79bc49b8b5-jrq6x_openstack(3b6077a3-21ee-4b10-8f3b-dfefdad1c51c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:03:22 crc kubenswrapper[4970]: E0930 10:03:22.409365 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-79bc49b8b5-jrq6x" podUID="3b6077a3-21ee-4b10-8f3b-dfefdad1c51c" Sep 30 10:03:22 crc kubenswrapper[4970]: E0930 10:03:22.442297 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Sep 30 10:03:22 crc kubenswrapper[4970]: E0930 10:03:22.442499 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nc5h57dh679hc4h694h5bbh655h58dh567hb6h64bh67bh6fh5fh569hc4h669h679h5cch578h5c9h665h5dh689h646h666hcbhcdh5c5h644h96h667q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8mzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6dddd5dd5-pln65_openstack(b3014af0-223a-4f93-9f31-4782c041da84): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:03:22 crc kubenswrapper[4970]: E0930 10:03:22.445213 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6dddd5dd5-pln65" podUID="b3014af0-223a-4f93-9f31-4782c041da84" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.503478 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d622-account-create-7hnv9" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.516539 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.579229 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-scripts\") pod \"13efae10-c96e-40f3-8f76-7091e463f19d\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.579385 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-config-data\") pod \"13efae10-c96e-40f3-8f76-7091e463f19d\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.579413 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-combined-ca-bundle\") pod \"13efae10-c96e-40f3-8f76-7091e463f19d\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.579471 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpgwb\" (UniqueName: \"kubernetes.io/projected/2cbb3863-af8c-4cba-9557-a02d280dd1c7-kube-api-access-bpgwb\") pod \"2cbb3863-af8c-4cba-9557-a02d280dd1c7\" (UID: \"2cbb3863-af8c-4cba-9557-a02d280dd1c7\") " Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.579514 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-credential-keys\") pod \"13efae10-c96e-40f3-8f76-7091e463f19d\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.579579 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-fernet-keys\") pod \"13efae10-c96e-40f3-8f76-7091e463f19d\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.579704 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvp4g\" (UniqueName: \"kubernetes.io/projected/13efae10-c96e-40f3-8f76-7091e463f19d-kube-api-access-qvp4g\") pod \"13efae10-c96e-40f3-8f76-7091e463f19d\" (UID: \"13efae10-c96e-40f3-8f76-7091e463f19d\") " Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.587870 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "13efae10-c96e-40f3-8f76-7091e463f19d" (UID: "13efae10-c96e-40f3-8f76-7091e463f19d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.587890 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "13efae10-c96e-40f3-8f76-7091e463f19d" (UID: "13efae10-c96e-40f3-8f76-7091e463f19d"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.590621 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cbb3863-af8c-4cba-9557-a02d280dd1c7-kube-api-access-bpgwb" (OuterVolumeSpecName: "kube-api-access-bpgwb") pod "2cbb3863-af8c-4cba-9557-a02d280dd1c7" (UID: "2cbb3863-af8c-4cba-9557-a02d280dd1c7"). InnerVolumeSpecName "kube-api-access-bpgwb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.593192 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-scripts" (OuterVolumeSpecName: "scripts") pod "13efae10-c96e-40f3-8f76-7091e463f19d" (UID: "13efae10-c96e-40f3-8f76-7091e463f19d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.593297 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13efae10-c96e-40f3-8f76-7091e463f19d-kube-api-access-qvp4g" (OuterVolumeSpecName: "kube-api-access-qvp4g") pod "13efae10-c96e-40f3-8f76-7091e463f19d" (UID: "13efae10-c96e-40f3-8f76-7091e463f19d"). InnerVolumeSpecName "kube-api-access-qvp4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.617375 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-config-data" (OuterVolumeSpecName: "config-data") pod "13efae10-c96e-40f3-8f76-7091e463f19d" (UID: "13efae10-c96e-40f3-8f76-7091e463f19d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.631293 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "13efae10-c96e-40f3-8f76-7091e463f19d" (UID: "13efae10-c96e-40f3-8f76-7091e463f19d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.682678 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.682725 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.682738 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpgwb\" (UniqueName: \"kubernetes.io/projected/2cbb3863-af8c-4cba-9557-a02d280dd1c7-kube-api-access-bpgwb\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.682747 4970 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-credential-keys\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.682755 4970 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-fernet-keys\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.682763 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvp4g\" (UniqueName: \"kubernetes.io/projected/13efae10-c96e-40f3-8f76-7091e463f19d-kube-api-access-qvp4g\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.682771 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/13efae10-c96e-40f3-8f76-7091e463f19d-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.880546 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kkvb8" event={"ID":"13efae10-c96e-40f3-8f76-7091e463f19d","Type":"ContainerDied","Data":"94f83567a1981e2f5020f7e41da2b87eafb07f7df5e7aff1eec095ef23ebe441"} Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.880593 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94f83567a1981e2f5020f7e41da2b87eafb07f7df5e7aff1eec095ef23ebe441" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.880606 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kkvb8" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.882520 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d622-account-create-7hnv9" event={"ID":"2cbb3863-af8c-4cba-9557-a02d280dd1c7","Type":"ContainerDied","Data":"ceec66bdca4f542755ce01eb6dfa46d67568c2df50429756a62e0b39a710e5cd"} Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.882760 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceec66bdca4f542755ce01eb6dfa46d67568c2df50429756a62e0b39a710e5cd" Sep 30 10:03:22 crc kubenswrapper[4970]: I0930 10:03:22.882698 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d622-account-create-7hnv9" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.613713 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kkvb8"] Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.621217 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kkvb8"] Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.685160 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13efae10-c96e-40f3-8f76-7091e463f19d" path="/var/lib/kubelet/pods/13efae10-c96e-40f3-8f76-7091e463f19d/volumes" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.717688 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lhrsc"] Sep 30 10:03:23 crc kubenswrapper[4970]: E0930 10:03:23.718246 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13efae10-c96e-40f3-8f76-7091e463f19d" containerName="keystone-bootstrap" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.718275 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="13efae10-c96e-40f3-8f76-7091e463f19d" containerName="keystone-bootstrap" Sep 30 10:03:23 crc kubenswrapper[4970]: E0930 10:03:23.718309 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cbb3863-af8c-4cba-9557-a02d280dd1c7" containerName="mariadb-account-create" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.718319 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cbb3863-af8c-4cba-9557-a02d280dd1c7" containerName="mariadb-account-create" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.718521 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cbb3863-af8c-4cba-9557-a02d280dd1c7" containerName="mariadb-account-create" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.718541 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="13efae10-c96e-40f3-8f76-7091e463f19d" containerName="keystone-bootstrap" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.719196 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.722914 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.723214 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7kcbp" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.725508 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lhrsc"] Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.722823 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.739350 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.807349 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-fernet-keys\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.807455 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-scripts\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.807621 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-combined-ca-bundle\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.807659 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-credential-keys\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.807709 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b9rr\" (UniqueName: \"kubernetes.io/projected/1e80a6b3-9edf-4984-b37c-80940382be1e-kube-api-access-2b9rr\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.807736 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-config-data\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.909759 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b9rr\" (UniqueName: \"kubernetes.io/projected/1e80a6b3-9edf-4984-b37c-80940382be1e-kube-api-access-2b9rr\") pod 
\"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.909826 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-config-data\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.909854 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-fernet-keys\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.909892 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-scripts\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.909954 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-combined-ca-bundle\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.910005 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-credential-keys\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.916706 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-scripts\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.918888 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-config-data\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.919115 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-combined-ca-bundle\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.919266 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-fernet-keys\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.920539 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-credential-keys\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:23 crc kubenswrapper[4970]: I0930 10:03:23.929069 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b9rr\" (UniqueName: \"kubernetes.io/projected/1e80a6b3-9edf-4984-b37c-80940382be1e-kube-api-access-2b9rr\") pod \"keystone-bootstrap-lhrsc\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.011299 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-ll2vt"] Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.013117 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.017564 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.017669 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.018302 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sg5bl" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.026901 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ll2vt"] Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.048508 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.114390 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbxxj\" (UniqueName: \"kubernetes.io/projected/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-kube-api-access-pbxxj\") pod \"neutron-db-sync-ll2vt\" (UID: \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\") " pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.114461 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-combined-ca-bundle\") pod \"neutron-db-sync-ll2vt\" (UID: \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\") " pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.114546 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-config\") pod \"neutron-db-sync-ll2vt\" (UID: \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\") " pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:24 crc kubenswrapper[4970]: E0930 10:03:24.186546 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Sep 30 10:03:24 crc kubenswrapper[4970]: E0930 10:03:24.186808 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2vjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-db-sync-zdzz8_openstack(3d62c32c-2c57-455b-9d92-6add27e33831): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:03:24 crc kubenswrapper[4970]: E0930 10:03:24.188080 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-zdzz8" podUID="3d62c32c-2c57-455b-9d92-6add27e33831" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.216125 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbxxj\" (UniqueName: \"kubernetes.io/projected/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-kube-api-access-pbxxj\") pod \"neutron-db-sync-ll2vt\" (UID: \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\") " pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.216194 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-combined-ca-bundle\") pod \"neutron-db-sync-ll2vt\" (UID: \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\") " pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.216250 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-config\") pod \"neutron-db-sync-ll2vt\" (UID: \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\") " pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.221031 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-config\") pod \"neutron-db-sync-ll2vt\" (UID: \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\") " pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.221512 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-combined-ca-bundle\") pod \"neutron-db-sync-ll2vt\" (UID: \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\") " pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.233042 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbxxj\" (UniqueName: \"kubernetes.io/projected/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-kube-api-access-pbxxj\") pod \"neutron-db-sync-ll2vt\" (UID: \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\") " pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.281314 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.344590 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.420746 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b3014af0-223a-4f93-9f31-4782c041da84-horizon-secret-key\") pod \"b3014af0-223a-4f93-9f31-4782c041da84\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.420879 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3014af0-223a-4f93-9f31-4782c041da84-scripts\") pod \"b3014af0-223a-4f93-9f31-4782c041da84\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.421204 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3014af0-223a-4f93-9f31-4782c041da84-config-data\") pod \"b3014af0-223a-4f93-9f31-4782c041da84\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.421264 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3014af0-223a-4f93-9f31-4782c041da84-logs\") pod \"b3014af0-223a-4f93-9f31-4782c041da84\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.421338 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8mzl\" (UniqueName: \"kubernetes.io/projected/b3014af0-223a-4f93-9f31-4782c041da84-kube-api-access-p8mzl\") pod \"b3014af0-223a-4f93-9f31-4782c041da84\" (UID: \"b3014af0-223a-4f93-9f31-4782c041da84\") " Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.421880 4970 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/b3014af0-223a-4f93-9f31-4782c041da84-logs" (OuterVolumeSpecName: "logs") pod "b3014af0-223a-4f93-9f31-4782c041da84" (UID: "b3014af0-223a-4f93-9f31-4782c041da84"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.422145 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3014af0-223a-4f93-9f31-4782c041da84-config-data" (OuterVolumeSpecName: "config-data") pod "b3014af0-223a-4f93-9f31-4782c041da84" (UID: "b3014af0-223a-4f93-9f31-4782c041da84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.422529 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b3014af0-223a-4f93-9f31-4782c041da84-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.422598 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3014af0-223a-4f93-9f31-4782c041da84-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.423087 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3014af0-223a-4f93-9f31-4782c041da84-scripts" (OuterVolumeSpecName: "scripts") pod "b3014af0-223a-4f93-9f31-4782c041da84" (UID: "b3014af0-223a-4f93-9f31-4782c041da84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.425213 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3014af0-223a-4f93-9f31-4782c041da84-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b3014af0-223a-4f93-9f31-4782c041da84" (UID: "b3014af0-223a-4f93-9f31-4782c041da84"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.426662 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3014af0-223a-4f93-9f31-4782c041da84-kube-api-access-p8mzl" (OuterVolumeSpecName: "kube-api-access-p8mzl") pod "b3014af0-223a-4f93-9f31-4782c041da84" (UID: "b3014af0-223a-4f93-9f31-4782c041da84"). InnerVolumeSpecName "kube-api-access-p8mzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.524802 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8mzl\" (UniqueName: \"kubernetes.io/projected/b3014af0-223a-4f93-9f31-4782c041da84-kube-api-access-p8mzl\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.524862 4970 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b3014af0-223a-4f93-9f31-4782c041da84-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.524879 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b3014af0-223a-4f93-9f31-4782c041da84-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:24 crc kubenswrapper[4970]: E0930 10:03:24.622303 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Sep 30 10:03:24 crc kubenswrapper[4970]: E0930 10:03:24.622601 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n67bh565hbbh599h65dh55ch5b5h5bch94h5fch675h68dh676hc4h5bchc9hf4h698h674h667h74h699h89hc8h7bh584h584h686h66fhcdh597h7q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nptcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.908709 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6dddd5dd5-pln65" event={"ID":"b3014af0-223a-4f93-9f31-4782c041da84","Type":"ContainerDied","Data":"fcbcfe76070a9fb2058bab1a2641dabd40170841319c089f6973805be11e00c8"} Sep 30 10:03:24 crc kubenswrapper[4970]: I0930 10:03:24.908968 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6dddd5dd5-pln65" Sep 30 10:03:24 crc kubenswrapper[4970]: E0930 10:03:24.910347 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-zdzz8" podUID="3d62c32c-2c57-455b-9d92-6add27e33831" Sep 30 10:03:25 crc kubenswrapper[4970]: I0930 10:03:25.008302 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6dddd5dd5-pln65"] Sep 30 10:03:25 crc kubenswrapper[4970]: I0930 10:03:25.019545 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6dddd5dd5-pln65"] Sep 30 10:03:25 crc kubenswrapper[4970]: I0930 10:03:25.687920 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3014af0-223a-4f93-9f31-4782c041da84" path="/var/lib/kubelet/pods/b3014af0-223a-4f93-9f31-4782c041da84/volumes" Sep 30 10:03:26 crc kubenswrapper[4970]: E0930 10:03:26.006358 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Sep 30 10:03:26 crc kubenswrapper[4970]: E0930 10:03:26.006582 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8bmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4pw5n_openstack(ad24190f-4eb6-49c8-bad6-c33a817cd9c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:03:26 crc kubenswrapper[4970]: E0930 10:03:26.009731 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4pw5n" podUID="ad24190f-4eb6-49c8-bad6-c33a817cd9c6" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.055058 4970 scope.go:117] "RemoveContainer" containerID="72a546b580bc4dc4875cc65e439184d75c7ee8a23194e736b52e15fa66fb1e86" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.277025 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.279615 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.286169 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.374982 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-logs\") pod \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375077 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-scripts\") pod \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375133 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-config-data\") pod \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375202 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh6lp\" (UniqueName: \"kubernetes.io/projected/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-kube-api-access-gh6lp\") pod \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375255 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-ovsdbserver-sb\") pod \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375284 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-config-data\") pod \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375311 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-dns-swift-storage-0\") pod \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375346 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-dns-svc\") pod \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375383 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-horizon-secret-key\") pod \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375449 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8p8k\" (UniqueName: \"kubernetes.io/projected/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-kube-api-access-n8p8k\") pod \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\" (UID: 
\"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375479 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-logs\") pod \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375502 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-horizon-secret-key\") pod \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\" (UID: \"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375524 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-config\") pod \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375568 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxtgz\" (UniqueName: \"kubernetes.io/projected/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-kube-api-access-rxtgz\") pod \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375648 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-ovsdbserver-nb\") pod \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.375708 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-scripts\") pod \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\" (UID: \"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c\") " Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.377208 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-scripts" (OuterVolumeSpecName: "scripts") pod "3b6077a3-21ee-4b10-8f3b-dfefdad1c51c" (UID: "3b6077a3-21ee-4b10-8f3b-dfefdad1c51c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.377736 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-logs" (OuterVolumeSpecName: "logs") pod "8e8b2c9b-7b7b-4650-9e5a-ae69390ba454" (UID: "8e8b2c9b-7b7b-4650-9e5a-ae69390ba454"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.382485 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-config-data" (OuterVolumeSpecName: "config-data") pod "8e8b2c9b-7b7b-4650-9e5a-ae69390ba454" (UID: "8e8b2c9b-7b7b-4650-9e5a-ae69390ba454"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.383770 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-scripts" (OuterVolumeSpecName: "scripts") pod "8e8b2c9b-7b7b-4650-9e5a-ae69390ba454" (UID: "8e8b2c9b-7b7b-4650-9e5a-ae69390ba454"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.384574 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-logs" (OuterVolumeSpecName: "logs") pod "3b6077a3-21ee-4b10-8f3b-dfefdad1c51c" (UID: "3b6077a3-21ee-4b10-8f3b-dfefdad1c51c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.385652 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-config-data" (OuterVolumeSpecName: "config-data") pod "3b6077a3-21ee-4b10-8f3b-dfefdad1c51c" (UID: "3b6077a3-21ee-4b10-8f3b-dfefdad1c51c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.406062 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-kube-api-access-gh6lp" (OuterVolumeSpecName: "kube-api-access-gh6lp") pod "8e8b2c9b-7b7b-4650-9e5a-ae69390ba454" (UID: "8e8b2c9b-7b7b-4650-9e5a-ae69390ba454"). InnerVolumeSpecName "kube-api-access-gh6lp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.406208 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-kube-api-access-rxtgz" (OuterVolumeSpecName: "kube-api-access-rxtgz") pod "bf98d493-f44f-4cd9-ab10-4a5a132ce94f" (UID: "bf98d493-f44f-4cd9-ab10-4a5a132ce94f"). InnerVolumeSpecName "kube-api-access-rxtgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.406867 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-kube-api-access-n8p8k" (OuterVolumeSpecName: "kube-api-access-n8p8k") pod "3b6077a3-21ee-4b10-8f3b-dfefdad1c51c" (UID: "3b6077a3-21ee-4b10-8f3b-dfefdad1c51c"). InnerVolumeSpecName "kube-api-access-n8p8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.407597 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8e8b2c9b-7b7b-4650-9e5a-ae69390ba454" (UID: "8e8b2c9b-7b7b-4650-9e5a-ae69390ba454"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.408252 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "3b6077a3-21ee-4b10-8f3b-dfefdad1c51c" (UID: "3b6077a3-21ee-4b10-8f3b-dfefdad1c51c"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.446487 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bf98d493-f44f-4cd9-ab10-4a5a132ce94f" (UID: "bf98d493-f44f-4cd9-ab10-4a5a132ce94f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.452942 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bf98d493-f44f-4cd9-ab10-4a5a132ce94f" (UID: "bf98d493-f44f-4cd9-ab10-4a5a132ce94f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.471866 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bf98d493-f44f-4cd9-ab10-4a5a132ce94f" (UID: "bf98d493-f44f-4cd9-ab10-4a5a132ce94f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.472459 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bf98d493-f44f-4cd9-ab10-4a5a132ce94f" (UID: "bf98d493-f44f-4cd9-ab10-4a5a132ce94f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.477372 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-config" (OuterVolumeSpecName: "config") pod "bf98d493-f44f-4cd9-ab10-4a5a132ce94f" (UID: "bf98d493-f44f-4cd9-ab10-4a5a132ce94f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.478951 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-config\") pod \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\" (UID: \"bf98d493-f44f-4cd9-ab10-4a5a132ce94f\") " Sep 30 10:03:26 crc kubenswrapper[4970]: W0930 10:03:26.479154 4970 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/bf98d493-f44f-4cd9-ab10-4a5a132ce94f/volumes/kubernetes.io~configmap/config Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.479173 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-config" (OuterVolumeSpecName: "config") pod "bf98d493-f44f-4cd9-ab10-4a5a132ce94f" (UID: "bf98d493-f44f-4cd9-ab10-4a5a132ce94f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.481090 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8p8k\" (UniqueName: \"kubernetes.io/projected/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-kube-api-access-n8p8k\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.481205 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.481301 4970 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.481373 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.481437 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxtgz\" (UniqueName: \"kubernetes.io/projected/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-kube-api-access-rxtgz\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.481510 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.481574 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.481646 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.481727 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.481807 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.481885 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh6lp\" (UniqueName: \"kubernetes.io/projected/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454-kube-api-access-gh6lp\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.482000 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.482088 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.482176 
4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.482263 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bf98d493-f44f-4cd9-ab10-4a5a132ce94f-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.482345 4970 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.637615 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-8fbb4f9c8-n8t5n"] Sep 30 10:03:26 crc kubenswrapper[4970]: W0930 10:03:26.651971 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b479413_73f2_4159_8ec6_5e23f139c53c.slice/crio-9235f07abe9ed33f5a2e8b5ee42a6a01fd3e47ca2f9aecc4d85ec285f2f902ad WatchSource:0}: Error finding container 9235f07abe9ed33f5a2e8b5ee42a6a01fd3e47ca2f9aecc4d85ec285f2f902ad: Status 404 returned error can't find the container with id 9235f07abe9ed33f5a2e8b5ee42a6a01fd3e47ca2f9aecc4d85ec285f2f902ad Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.683009 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f6674bb4b-gp2wp"] Sep 30 10:03:26 crc kubenswrapper[4970]: W0930 10:03:26.695083 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48bd5d72_868c_4d81_9d1f_6a03ba997169.slice/crio-cf4d95d501312c144c9a2b103bdf725d132bc371b34fa99b2a6e1cf4ec4619ca WatchSource:0}: Error finding container cf4d95d501312c144c9a2b103bdf725d132bc371b34fa99b2a6e1cf4ec4619ca: Status 404 returned error can't find the container with id cf4d95d501312c144c9a2b103bdf725d132bc371b34fa99b2a6e1cf4ec4619ca Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.821867 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ll2vt"] Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.837485 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lhrsc"] Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.928512 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d890158e-f620-412b-ad4f-11437ded0689","Type":"ContainerStarted","Data":"65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e"} Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.928723 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d890158e-f620-412b-ad4f-11437ded0689" containerName="glance-log" containerID="cri-o://29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60" gracePeriod=30 Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.928768 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="d890158e-f620-412b-ad4f-11437ded0689" containerName="glance-httpd" containerID="cri-o://65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e" gracePeriod=30 Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 
10:03:26.932211 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b5fcb5c4f-mwhb7" event={"ID":"8e8b2c9b-7b7b-4650-9e5a-ae69390ba454","Type":"ContainerDied","Data":"27cbc97964b720c4f162d41b4b4336d3dd976ae4825d4b98c8c4bb939d228b74"} Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.932241 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b5fcb5c4f-mwhb7" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.933437 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6674bb4b-gp2wp" event={"ID":"48bd5d72-868c-4d81-9d1f-6a03ba997169","Type":"ContainerStarted","Data":"cf4d95d501312c144c9a2b103bdf725d132bc371b34fa99b2a6e1cf4ec4619ca"} Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.941522 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" event={"ID":"bf98d493-f44f-4cd9-ab10-4a5a132ce94f","Type":"ContainerDied","Data":"e52081279786d28c2c2ec621e222f27ac2e7cf218d315edee5cb2750d6a78087"} Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.941606 4970 scope.go:117] "RemoveContainer" containerID="1e25e16d357f4992d1d3716ee20aeb8544e0462676b347425ccf853363c5e9d7" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.941818 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.945983 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8fbb4f9c8-n8t5n" event={"ID":"8b479413-73f2-4159-8ec6-5e23f139c53c","Type":"ContainerStarted","Data":"9235f07abe9ed33f5a2e8b5ee42a6a01fd3e47ca2f9aecc4d85ec285f2f902ad"} Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.952709 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-79bc49b8b5-jrq6x" Sep 30 10:03:26 crc kubenswrapper[4970]: E0930 10:03:26.958029 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-4pw5n" podUID="ad24190f-4eb6-49c8-bad6-c33a817cd9c6" Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.958212 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-79bc49b8b5-jrq6x" event={"ID":"3b6077a3-21ee-4b10-8f3b-dfefdad1c51c","Type":"ContainerDied","Data":"4768c12b6baa4e168391025a7ab58d049b287fc11c3348a50987dece9f8d5641"} Sep 30 10:03:26 crc kubenswrapper[4970]: I0930 10:03:26.962684 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=27.962629361 podStartE2EDuration="27.962629361s" podCreationTimestamp="2025-09-30 10:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:03:26.958581773 +0000 UTC m=+1020.030432707" watchObservedRunningTime="2025-09-30 10:03:26.962629361 +0000 UTC m=+1020.034480295" Sep 30 10:03:27 crc kubenswrapper[4970]: W0930 10:03:27.100299 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e80a6b3_9edf_4984_b37c_80940382be1e.slice/crio-467d419c97530cc798492e9608e13de3184a1fe4c88c3bb515a135ab27cd5b4d WatchSource:0}: Error finding container 467d419c97530cc798492e9608e13de3184a1fe4c88c3bb515a135ab27cd5b4d: Status 404 returned error can't find the container with id 467d419c97530cc798492e9608e13de3184a1fe4c88c3bb515a135ab27cd5b4d Sep 30 10:03:27 crc kubenswrapper[4970]: W0930 10:03:27.110975 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab2e1257_9947_41e6_8c2e_a366f4ea4c47.slice/crio-78f618dabd9b59fa58b15c6f76cc91d1e3b3dd4a602a4b0f518cb585a548c17a WatchSource:0}: Error finding container 78f618dabd9b59fa58b15c6f76cc91d1e3b3dd4a602a4b0f518cb585a548c17a: Status 404 returned error can't find the container with id 78f618dabd9b59fa58b15c6f76cc91d1e3b3dd4a602a4b0f518cb585a548c17a Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.188823 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:03:27 crc kubenswrapper[4970]: W0930 10:03:27.205694 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4048eb6d_7c40_4a50_9fba_d253cb710ee6.slice/crio-03dc2c4ac4554ac00d041777561be6390bff9473b6f0891a6bb5640dee1b5bf0 WatchSource:0}: Error finding container 03dc2c4ac4554ac00d041777561be6390bff9473b6f0891a6bb5640dee1b5bf0: Status 404 returned error can't find the container with id 03dc2c4ac4554ac00d041777561be6390bff9473b6f0891a6bb5640dee1b5bf0 Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.263715 4970 scope.go:117] "RemoveContainer" containerID="3c739f81b3d82acc27285f57dd7ebe38925896c69ca9b1c487249e728e58fd46" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.270717 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-x6dpl"] Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.281240 4970 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-x6dpl"] Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.311458 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b5fcb5c4f-mwhb7"] Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.320410 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b5fcb5c4f-mwhb7"] Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.349659 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-79bc49b8b5-jrq6x"] Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.356512 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-79bc49b8b5-jrq6x"] Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.487509 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-x6dpl" podUID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.126:5353: i/o timeout" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.555654 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.610806 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d890158e-f620-412b-ad4f-11437ded0689-logs\") pod \"d890158e-f620-412b-ad4f-11437ded0689\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.610852 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"d890158e-f620-412b-ad4f-11437ded0689\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.610898 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d890158e-f620-412b-ad4f-11437ded0689-httpd-run\") pod \"d890158e-f620-412b-ad4f-11437ded0689\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.611081 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-scripts\") pod \"d890158e-f620-412b-ad4f-11437ded0689\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.611451 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d890158e-f620-412b-ad4f-11437ded0689-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d890158e-f620-412b-ad4f-11437ded0689" (UID: "d890158e-f620-412b-ad4f-11437ded0689"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.611602 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wttmj\" (UniqueName: \"kubernetes.io/projected/d890158e-f620-412b-ad4f-11437ded0689-kube-api-access-wttmj\") pod \"d890158e-f620-412b-ad4f-11437ded0689\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.611753 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-combined-ca-bundle\") pod \"d890158e-f620-412b-ad4f-11437ded0689\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.611936 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-config-data\") pod \"d890158e-f620-412b-ad4f-11437ded0689\" (UID: \"d890158e-f620-412b-ad4f-11437ded0689\") " Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.612781 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d890158e-f620-412b-ad4f-11437ded0689-logs" (OuterVolumeSpecName: "logs") pod "d890158e-f620-412b-ad4f-11437ded0689" (UID: "d890158e-f620-412b-ad4f-11437ded0689"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.613354 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d890158e-f620-412b-ad4f-11437ded0689-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.613376 4970 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d890158e-f620-412b-ad4f-11437ded0689-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.617002 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d890158e-f620-412b-ad4f-11437ded0689-kube-api-access-wttmj" (OuterVolumeSpecName: "kube-api-access-wttmj") pod "d890158e-f620-412b-ad4f-11437ded0689" (UID: "d890158e-f620-412b-ad4f-11437ded0689"). InnerVolumeSpecName "kube-api-access-wttmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.617562 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "d890158e-f620-412b-ad4f-11437ded0689" (UID: "d890158e-f620-412b-ad4f-11437ded0689"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.617801 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-scripts" (OuterVolumeSpecName: "scripts") pod "d890158e-f620-412b-ad4f-11437ded0689" (UID: "d890158e-f620-412b-ad4f-11437ded0689"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.643260 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d890158e-f620-412b-ad4f-11437ded0689" (UID: "d890158e-f620-412b-ad4f-11437ded0689"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.673428 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-config-data" (OuterVolumeSpecName: "config-data") pod "d890158e-f620-412b-ad4f-11437ded0689" (UID: "d890158e-f620-412b-ad4f-11437ded0689"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.698592 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b6077a3-21ee-4b10-8f3b-dfefdad1c51c" path="/var/lib/kubelet/pods/3b6077a3-21ee-4b10-8f3b-dfefdad1c51c/volumes" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.700201 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8b2c9b-7b7b-4650-9e5a-ae69390ba454" path="/var/lib/kubelet/pods/8e8b2c9b-7b7b-4650-9e5a-ae69390ba454/volumes" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.700693 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" path="/var/lib/kubelet/pods/bf98d493-f44f-4cd9-ab10-4a5a132ce94f/volumes" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.714902 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wttmj\" (UniqueName: \"kubernetes.io/projected/d890158e-f620-412b-ad4f-11437ded0689-kube-api-access-wttmj\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.714929 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.714939 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.714965 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.714974 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d890158e-f620-412b-ad4f-11437ded0689-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.751663 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.817036 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.961913 4970 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4048eb6d-7c40-4a50-9fba-d253cb710ee6","Type":"ContainerStarted","Data":"03dc2c4ac4554ac00d041777561be6390bff9473b6f0891a6bb5640dee1b5bf0"} Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.964429 4970 generic.go:334] "Generic (PLEG): container finished" podID="d890158e-f620-412b-ad4f-11437ded0689" containerID="65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e" exitCode=143 Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.964475 4970 generic.go:334] "Generic (PLEG): container finished" podID="d890158e-f620-412b-ad4f-11437ded0689" containerID="29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60" exitCode=143 Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.964564 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d890158e-f620-412b-ad4f-11437ded0689","Type":"ContainerDied","Data":"65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e"} Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.964687 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d890158e-f620-412b-ad4f-11437ded0689","Type":"ContainerDied","Data":"29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60"} Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.964700 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"d890158e-f620-412b-ad4f-11437ded0689","Type":"ContainerDied","Data":"660b0e488580dc7142b245aa3302eb7db047cb78a041e9e408641c278b8e033d"} Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.964722 4970 scope.go:117] "RemoveContainer" containerID="65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.964906 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.968164 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6674bb4b-gp2wp" event={"ID":"48bd5d72-868c-4d81-9d1f-6a03ba997169","Type":"ContainerStarted","Data":"d403e368d182209dd7a414661db3f68d40f32e6e8d7e33e45fb6614d5c1c8d68"} Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.968227 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6674bb4b-gp2wp" event={"ID":"48bd5d72-868c-4d81-9d1f-6a03ba997169","Type":"ContainerStarted","Data":"09566df69d0b238347b6a02d51e9d164ccc3a9001a0cbb3925c87d0cac561a81"} Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.971743 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lhrsc" event={"ID":"1e80a6b3-9edf-4984-b37c-80940382be1e","Type":"ContainerStarted","Data":"48bf0db9c01b8ebefa87fdb71af9c0ba8a31ce3769f6800a86035d3f4b1f3661"} Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.971789 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lhrsc" event={"ID":"1e80a6b3-9edf-4984-b37c-80940382be1e","Type":"ContainerStarted","Data":"467d419c97530cc798492e9608e13de3184a1fe4c88c3bb515a135ab27cd5b4d"} Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.975633 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ll2vt" event={"ID":"ab2e1257-9947-41e6-8c2e-a366f4ea4c47","Type":"ContainerStarted","Data":"214925743b47809ebb0e3c09c776ecd30126704a55bd0862df1d1b312c361c57"} Sep 30 10:03:27 crc kubenswrapper[4970]: I0930 10:03:27.975685 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ll2vt" event={"ID":"ab2e1257-9947-41e6-8c2e-a366f4ea4c47","Type":"ContainerStarted","Data":"78f618dabd9b59fa58b15c6f76cc91d1e3b3dd4a602a4b0f518cb585a548c17a"} Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.008720 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8fbb4f9c8-n8t5n" event={"ID":"8b479413-73f2-4159-8ec6-5e23f139c53c","Type":"ContainerStarted","Data":"eb400fae9c3f4d99010f3bd9fa9a7d44f4b16f475ce783887540989104500132"} Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.008783 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-8fbb4f9c8-n8t5n" event={"ID":"8b479413-73f2-4159-8ec6-5e23f139c53c","Type":"ContainerStarted","Data":"f0e70588fc189a6ff61619e1a900af0bcb62742e883f5b34a88cb5524443a764"} Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.014207 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6f6674bb4b-gp2wp" podStartSLOduration=21.334054344 podStartE2EDuration="22.014180403s" podCreationTimestamp="2025-09-30 10:03:06 +0000 UTC" firstStartedPulling="2025-09-30 10:03:26.699754825 +0000 UTC m=+1019.771605759" lastFinishedPulling="2025-09-30 10:03:27.379880884 +0000 UTC m=+1020.451731818" observedRunningTime="2025-09-30 10:03:28.004413513 +0000 UTC m=+1021.076264447" watchObservedRunningTime="2025-09-30 10:03:28.014180403 +0000 UTC m=+1021.086031337" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.029426 4970 scope.go:117] "RemoveContainer" containerID="29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.034353 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229","Type":"ContainerStarted","Data":"517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0"} Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.037961 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lhrsc" podStartSLOduration=5.037946946 podStartE2EDuration="5.037946946s" podCreationTimestamp="2025-09-30 10:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:03:28.031174595 +0000 UTC m=+1021.103025529" watchObservedRunningTime="2025-09-30 10:03:28.037946946 +0000 UTC m=+1021.109797880" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.056875 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.066404 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.110620 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:03:28 crc kubenswrapper[4970]: E0930 10:03:28.111325 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d890158e-f620-412b-ad4f-11437ded0689" containerName="glance-log" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.111423 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d890158e-f620-412b-ad4f-11437ded0689" containerName="glance-log" Sep 30 10:03:28 crc kubenswrapper[4970]: E0930 10:03:28.111487 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" containerName="init" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.111606 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" containerName="init" Sep 30 10:03:28 crc kubenswrapper[4970]: E0930 10:03:28.111671 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" containerName="dnsmasq-dns" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.114845 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" containerName="dnsmasq-dns" Sep 30 10:03:28 crc kubenswrapper[4970]: E0930 10:03:28.114938 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d890158e-f620-412b-ad4f-11437ded0689" containerName="glance-httpd" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.115014 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d890158e-f620-412b-ad4f-11437ded0689" containerName="glance-httpd" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.115322 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d890158e-f620-412b-ad4f-11437ded0689" containerName="glance-log" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.115416 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf98d493-f44f-4cd9-ab10-4a5a132ce94f" containerName="dnsmasq-dns" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.115492 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d890158e-f620-412b-ad4f-11437ded0689" containerName="glance-httpd" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.117573 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.121155 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.121535 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.133763 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-8fbb4f9c8-n8t5n" podStartSLOduration=21.510745806 podStartE2EDuration="22.133744485s" podCreationTimestamp="2025-09-30 10:03:06 +0000 UTC" firstStartedPulling="2025-09-30 10:03:26.656527705 +0000 UTC m=+1019.728378649" lastFinishedPulling="2025-09-30 10:03:27.279526394 +0000 UTC m=+1020.351377328" observedRunningTime="2025-09-30 10:03:28.075775782 +0000 UTC m=+1021.147626716" watchObservedRunningTime="2025-09-30 10:03:28.133744485 +0000 UTC m=+1021.205595419" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.161480 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-ll2vt" podStartSLOduration=5.161450492 podStartE2EDuration="5.161450492s" podCreationTimestamp="2025-09-30 10:03:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:03:28.105044991 +0000 UTC m=+1021.176895915" watchObservedRunningTime="2025-09-30 10:03:28.161450492 +0000 UTC m=+1021.233301426" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.171078 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.234755 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.235211 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-config-data\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.235254 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp9k9\" (UniqueName: \"kubernetes.io/projected/ce89199d-435e-4383-a28f-c6326ec1f954-kube-api-access-pp9k9\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.235340 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-scripts\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.235376 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.235431 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce89199d-435e-4383-a28f-c6326ec1f954-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.235520 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.237508 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce89199d-435e-4383-a28f-c6326ec1f954-logs\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.240373 4970 scope.go:117] "RemoveContainer" containerID="65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e" Sep 30 10:03:28 crc kubenswrapper[4970]: E0930 10:03:28.240920 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e\": container with ID starting with 65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e not found: ID does not exist" containerID="65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.240955 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e"} err="failed to get container status \"65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e\": rpc error: code = NotFound desc = could not find container \"65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e\": container with ID starting with 65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e not found: ID does not exist" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.241022 4970 scope.go:117] "RemoveContainer" containerID="29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60" Sep 30 10:03:28 crc kubenswrapper[4970]: E0930 10:03:28.241418 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60\": container with ID starting with 29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60 not found: ID does not exist" containerID="29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.241449 4970 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60"} err="failed to get container status \"29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60\": rpc error: code = NotFound desc = could not find container \"29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60\": container with ID starting with 29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60 not found: ID does not exist" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.241468 4970 scope.go:117] "RemoveContainer" containerID="65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.241919 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e"} err="failed to get container status \"65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e\": rpc error: code = NotFound desc = could not find container \"65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e\": container with ID starting with 65ceed1a824135c11ab9b8d16146a93275fe2cd569de6fa4877ae0dc8493c73e not found: ID does not exist" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.241938 4970 scope.go:117] "RemoveContainer" containerID="29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.242647 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60"} err="failed to get container status \"29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60\": rpc error: code = NotFound desc = could not find container \"29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60\": container with ID starting with 29ce9f15727847e4d2a96ce9943a91c30e712512321ccba6cfa59bf430a2fc60 not found: ID does not exist" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.339886 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.339947 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-config-data\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.339967 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp9k9\" (UniqueName: \"kubernetes.io/projected/ce89199d-435e-4383-a28f-c6326ec1f954-kube-api-access-pp9k9\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.340222 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-scripts\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc 
kubenswrapper[4970]: I0930 10:03:28.340247 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.340273 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce89199d-435e-4383-a28f-c6326ec1f954-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.340314 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.340340 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce89199d-435e-4383-a28f-c6326ec1f954-logs\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.341091 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.342510 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce89199d-435e-4383-a28f-c6326ec1f954-logs\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.348443 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-scripts\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.355549 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce89199d-435e-4383-a28f-c6326ec1f954-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.356810 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.359320 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.362703 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp9k9\" (UniqueName: \"kubernetes.io/projected/ce89199d-435e-4383-a28f-c6326ec1f954-kube-api-access-pp9k9\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.365733 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-config-data\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.408668 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") " pod="openstack/glance-default-external-api-0" Sep 30 10:03:28 crc kubenswrapper[4970]: I0930 10:03:28.588375 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 10:03:29 crc kubenswrapper[4970]: I0930 10:03:29.069501 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4048eb6d-7c40-4a50-9fba-d253cb710ee6","Type":"ContainerStarted","Data":"9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374"} Sep 30 10:03:29 crc kubenswrapper[4970]: I0930 10:03:29.069568 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4048eb6d-7c40-4a50-9fba-d253cb710ee6","Type":"ContainerStarted","Data":"a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4"} Sep 30 10:03:29 crc kubenswrapper[4970]: I0930 10:03:29.109516 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=12.109494251 podStartE2EDuration="12.109494251s" podCreationTimestamp="2025-09-30 10:03:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:03:29.102425053 +0000 UTC m=+1022.174275997" watchObservedRunningTime="2025-09-30 10:03:29.109494251 +0000 UTC m=+1022.181345185" Sep 30 10:03:29 crc kubenswrapper[4970]: I0930 10:03:29.303049 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:03:29 crc kubenswrapper[4970]: W0930 10:03:29.315234 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce89199d_435e_4383_a28f_c6326ec1f954.slice/crio-aa5fc410a14c6f3243e29d5ef7d438833c9edd85aed04a3f54093e2efaf917c5 WatchSource:0}: Error finding container aa5fc410a14c6f3243e29d5ef7d438833c9edd85aed04a3f54093e2efaf917c5: Status 404 returned error can't find the container with id aa5fc410a14c6f3243e29d5ef7d438833c9edd85aed04a3f54093e2efaf917c5 Sep 30 10:03:29 crc kubenswrapper[4970]: I0930 10:03:29.680301 4970 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d890158e-f620-412b-ad4f-11437ded0689" path="/var/lib/kubelet/pods/d890158e-f620-412b-ad4f-11437ded0689/volumes" Sep 30 10:03:30 crc kubenswrapper[4970]: I0930 10:03:30.106276 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce89199d-435e-4383-a28f-c6326ec1f954","Type":"ContainerStarted","Data":"aa5fc410a14c6f3243e29d5ef7d438833c9edd85aed04a3f54093e2efaf917c5"} Sep 30 10:03:33 crc kubenswrapper[4970]: I0930 10:03:33.142722 4970 generic.go:334] "Generic (PLEG): container finished" podID="1e80a6b3-9edf-4984-b37c-80940382be1e" containerID="48bf0db9c01b8ebefa87fdb71af9c0ba8a31ce3769f6800a86035d3f4b1f3661" exitCode=0 Sep 30 10:03:33 crc kubenswrapper[4970]: I0930 10:03:33.142887 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lhrsc" event={"ID":"1e80a6b3-9edf-4984-b37c-80940382be1e","Type":"ContainerDied","Data":"48bf0db9c01b8ebefa87fdb71af9c0ba8a31ce3769f6800a86035d3f4b1f3661"} Sep 30 10:03:33 crc kubenswrapper[4970]: I0930 10:03:33.152447 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce89199d-435e-4383-a28f-c6326ec1f954","Type":"ContainerStarted","Data":"0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910"} Sep 30 10:03:33 crc kubenswrapper[4970]: E0930 10:03:33.432061 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd890158e_f620_412b_ad4f_11437ded0689.slice/crio-660b0e488580dc7142b245aa3302eb7db047cb78a041e9e408641c278b8e033d\": RecentStats: unable to find data in memory cache]" Sep 30 10:03:34 crc kubenswrapper[4970]: I0930 10:03:34.824176 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:03:34 crc kubenswrapper[4970]: I0930 10:03:34.824475 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.190672 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lhrsc" event={"ID":"1e80a6b3-9edf-4984-b37c-80940382be1e","Type":"ContainerDied","Data":"467d419c97530cc798492e9608e13de3184a1fe4c88c3bb515a135ab27cd5b4d"} Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.191072 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="467d419c97530cc798492e9608e13de3184a1fe4c88c3bb515a135ab27cd5b4d" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.260807 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.429458 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-credential-keys\") pod \"1e80a6b3-9edf-4984-b37c-80940382be1e\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.429512 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-combined-ca-bundle\") pod \"1e80a6b3-9edf-4984-b37c-80940382be1e\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.429542 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-config-data\") pod \"1e80a6b3-9edf-4984-b37c-80940382be1e\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.429622 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-scripts\") pod \"1e80a6b3-9edf-4984-b37c-80940382be1e\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.429710 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-fernet-keys\") pod \"1e80a6b3-9edf-4984-b37c-80940382be1e\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.429818 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b9rr\" (UniqueName: \"kubernetes.io/projected/1e80a6b3-9edf-4984-b37c-80940382be1e-kube-api-access-2b9rr\") pod \"1e80a6b3-9edf-4984-b37c-80940382be1e\" (UID: \"1e80a6b3-9edf-4984-b37c-80940382be1e\") " Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.434611 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "1e80a6b3-9edf-4984-b37c-80940382be1e" (UID: "1e80a6b3-9edf-4984-b37c-80940382be1e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.434703 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "1e80a6b3-9edf-4984-b37c-80940382be1e" (UID: "1e80a6b3-9edf-4984-b37c-80940382be1e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.434948 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-scripts" (OuterVolumeSpecName: "scripts") pod "1e80a6b3-9edf-4984-b37c-80940382be1e" (UID: "1e80a6b3-9edf-4984-b37c-80940382be1e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.437581 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e80a6b3-9edf-4984-b37c-80940382be1e-kube-api-access-2b9rr" (OuterVolumeSpecName: "kube-api-access-2b9rr") pod "1e80a6b3-9edf-4984-b37c-80940382be1e" (UID: "1e80a6b3-9edf-4984-b37c-80940382be1e"). InnerVolumeSpecName "kube-api-access-2b9rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.460687 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e80a6b3-9edf-4984-b37c-80940382be1e" (UID: "1e80a6b3-9edf-4984-b37c-80940382be1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.467665 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-config-data" (OuterVolumeSpecName: "config-data") pod "1e80a6b3-9edf-4984-b37c-80940382be1e" (UID: "1e80a6b3-9edf-4984-b37c-80940382be1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.532164 4970 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-fernet-keys\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.532212 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b9rr\" (UniqueName: \"kubernetes.io/projected/1e80a6b3-9edf-4984-b37c-80940382be1e-kube-api-access-2b9rr\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.532228 4970 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-credential-keys\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.532240 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.532252 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:36 crc kubenswrapper[4970]: I0930 10:03:36.532265 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e80a6b3-9edf-4984-b37c-80940382be1e-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.201083 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229","Type":"ContainerStarted","Data":"5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be"} Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.203105 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2n6k8" 
event={"ID":"6ea9f861-9877-480f-a490-08c80d2580cf","Type":"ContainerStarted","Data":"ae088867f7339eb9f50393931a94de209e64d3000981e959cd19fcd5a9b19886"} Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.206268 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce89199d-435e-4383-a28f-c6326ec1f954","Type":"ContainerStarted","Data":"ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5"} Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.206288 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lhrsc" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.233335 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.233670 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.238323 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6674bb4b-gp2wp" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.263514 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-2n6k8" podStartSLOduration=3.207626803 podStartE2EDuration="44.26349323s" podCreationTimestamp="2025-09-30 10:02:53 +0000 UTC" firstStartedPulling="2025-09-30 10:02:55.078752194 +0000 UTC m=+988.150603128" lastFinishedPulling="2025-09-30 10:03:36.134618601 +0000 UTC m=+1029.206469555" observedRunningTime="2025-09-30 10:03:37.229323341 +0000 UTC m=+1030.301174275" watchObservedRunningTime="2025-09-30 10:03:37.26349323 +0000 UTC m=+1030.335344164" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.265174 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.265165274 podStartE2EDuration="9.265165274s" podCreationTimestamp="2025-09-30 10:03:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:03:37.259139554 +0000 UTC m=+1030.330990488" watchObservedRunningTime="2025-09-30 10:03:37.265165274 +0000 UTC m=+1030.337016208" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.297378 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.297433 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-8fbb4f9c8-n8t5n" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.298922 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8fbb4f9c8-n8t5n" podUID="8b479413-73f2-4159-8ec6-5e23f139c53c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.392791 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6cf9989bfd-qxb2j"] Sep 30 10:03:37 crc kubenswrapper[4970]: E0930 10:03:37.393210 4970 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e80a6b3-9edf-4984-b37c-80940382be1e" containerName="keystone-bootstrap" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.393230 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e80a6b3-9edf-4984-b37c-80940382be1e" containerName="keystone-bootstrap" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.393433 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e80a6b3-9edf-4984-b37c-80940382be1e" containerName="keystone-bootstrap" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.394076 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.397573 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.398002 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.398169 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-7kcbp" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.398303 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.399201 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.400492 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.417878 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cf9989bfd-qxb2j"] Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.555135 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn7tf\" (UniqueName: \"kubernetes.io/projected/f1a85d9a-3eae-49ff-af87-14d444dec7d6-kube-api-access-cn7tf\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.555273 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-combined-ca-bundle\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.555443 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-scripts\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.555938 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-credential-keys\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.556030 
4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-config-data\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.556116 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-public-tls-certs\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.556160 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-fernet-keys\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.556219 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-internal-tls-certs\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.658427 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-credential-keys\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.658498 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-config-data\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.658538 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-public-tls-certs\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.658570 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-fernet-keys\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.658604 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-internal-tls-certs\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.658677 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cn7tf\" (UniqueName: \"kubernetes.io/projected/f1a85d9a-3eae-49ff-af87-14d444dec7d6-kube-api-access-cn7tf\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.658715 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-combined-ca-bundle\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.658766 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-scripts\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.664408 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-credential-keys\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.664758 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-internal-tls-certs\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.665611 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-config-data\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.674679 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-public-tls-certs\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.674876 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-combined-ca-bundle\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.675152 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-scripts\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.686930 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn7tf\" (UniqueName: \"kubernetes.io/projected/f1a85d9a-3eae-49ff-af87-14d444dec7d6-kube-api-access-cn7tf\") 
pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.691984 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f1a85d9a-3eae-49ff-af87-14d444dec7d6-fernet-keys\") pod \"keystone-6cf9989bfd-qxb2j\" (UID: \"f1a85d9a-3eae-49ff-af87-14d444dec7d6\") " pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:37 crc kubenswrapper[4970]: I0930 10:03:37.767061 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:38 crc kubenswrapper[4970]: I0930 10:03:38.226617 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zdzz8" event={"ID":"3d62c32c-2c57-455b-9d92-6add27e33831","Type":"ContainerStarted","Data":"4e91bd20cab6975f801c4ccaa9a7710e07f086b7c1118ecd7aa792569c099b37"} Sep 30 10:03:38 crc kubenswrapper[4970]: I0930 10:03:38.240593 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:38 crc kubenswrapper[4970]: I0930 10:03:38.240933 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:38 crc kubenswrapper[4970]: I0930 10:03:38.258270 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-zdzz8" podStartSLOduration=3.972308012 podStartE2EDuration="46.258247941s" podCreationTimestamp="2025-09-30 10:02:52 +0000 UTC" firstStartedPulling="2025-09-30 10:02:54.993137196 +0000 UTC m=+988.064988120" lastFinishedPulling="2025-09-30 10:03:37.279077115 +0000 UTC m=+1030.350928049" observedRunningTime="2025-09-30 10:03:38.249498768 +0000 UTC m=+1031.321349722" watchObservedRunningTime="2025-09-30 10:03:38.258247941 +0000 UTC m=+1031.330098885" Sep 30 10:03:38 crc kubenswrapper[4970]: I0930 10:03:38.270901 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6cf9989bfd-qxb2j"] Sep 30 10:03:38 crc kubenswrapper[4970]: I0930 10:03:38.321709 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:38 crc kubenswrapper[4970]: I0930 10:03:38.335827 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:38 crc kubenswrapper[4970]: I0930 10:03:38.590222 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 30 10:03:38 crc kubenswrapper[4970]: I0930 10:03:38.590265 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 30 10:03:38 crc kubenswrapper[4970]: I0930 10:03:38.645289 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 30 10:03:38 crc kubenswrapper[4970]: I0930 10:03:38.676072 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 30 10:03:39 crc kubenswrapper[4970]: I0930 10:03:39.251937 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cf9989bfd-qxb2j" event={"ID":"f1a85d9a-3eae-49ff-af87-14d444dec7d6","Type":"ContainerStarted","Data":"e62373560514ec6bd90e0484dc9528f8c15f838eadb705072184c95b487b79da"} Sep 
30 10:03:39 crc kubenswrapper[4970]: I0930 10:03:39.252188 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6cf9989bfd-qxb2j" event={"ID":"f1a85d9a-3eae-49ff-af87-14d444dec7d6","Type":"ContainerStarted","Data":"432edbacb53435e8624a2f3b09e5eb11722fb4299833fb4013b915d67431bd07"} Sep 30 10:03:39 crc kubenswrapper[4970]: I0930 10:03:39.252203 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:39 crc kubenswrapper[4970]: I0930 10:03:39.252868 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 30 10:03:39 crc kubenswrapper[4970]: I0930 10:03:39.252882 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:39 crc kubenswrapper[4970]: I0930 10:03:39.252891 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6cf9989bfd-qxb2j" Sep 30 10:03:39 crc kubenswrapper[4970]: I0930 10:03:39.252900 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 30 10:03:39 crc kubenswrapper[4970]: I0930 10:03:39.280512 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6cf9989bfd-qxb2j" podStartSLOduration=2.280489043 podStartE2EDuration="2.280489043s" podCreationTimestamp="2025-09-30 10:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:03:39.278981743 +0000 UTC m=+1032.350832677" watchObservedRunningTime="2025-09-30 10:03:39.280489043 +0000 UTC m=+1032.352339977" Sep 30 10:03:41 crc kubenswrapper[4970]: I0930 10:03:41.269160 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4pw5n" event={"ID":"ad24190f-4eb6-49c8-bad6-c33a817cd9c6","Type":"ContainerStarted","Data":"1084cb733673f7ccfee88310d49c49131ab929fb6ac87c9df6ba6704eabce6b3"} Sep 30 10:03:41 crc kubenswrapper[4970]: I0930 10:03:41.273033 4970 generic.go:334] "Generic (PLEG): container finished" podID="3d62c32c-2c57-455b-9d92-6add27e33831" containerID="4e91bd20cab6975f801c4ccaa9a7710e07f086b7c1118ecd7aa792569c099b37" exitCode=0 Sep 30 10:03:41 crc kubenswrapper[4970]: I0930 10:03:41.273159 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 10:03:41 crc kubenswrapper[4970]: I0930 10:03:41.273222 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zdzz8" event={"ID":"3d62c32c-2c57-455b-9d92-6add27e33831","Type":"ContainerDied","Data":"4e91bd20cab6975f801c4ccaa9a7710e07f086b7c1118ecd7aa792569c099b37"} Sep 30 10:03:41 crc kubenswrapper[4970]: I0930 10:03:41.273608 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 10:03:41 crc kubenswrapper[4970]: I0930 10:03:41.273629 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 10:03:41 crc kubenswrapper[4970]: I0930 10:03:41.281263 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:41 crc kubenswrapper[4970]: I0930 10:03:41.281423 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 30 10:03:41 crc kubenswrapper[4970]: I0930 10:03:41.296608 4970 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/cinder-db-sync-4pw5n" podStartSLOduration=3.6513550710000002 podStartE2EDuration="48.296589201s" podCreationTimestamp="2025-09-30 10:02:53 +0000 UTC" firstStartedPulling="2025-09-30 10:02:55.071717337 +0000 UTC m=+988.143568271" lastFinishedPulling="2025-09-30 10:03:39.716951467 +0000 UTC m=+1032.788802401" observedRunningTime="2025-09-30 10:03:41.285509857 +0000 UTC m=+1034.357360811" watchObservedRunningTime="2025-09-30 10:03:41.296589201 +0000 UTC m=+1034.368440135" Sep 30 10:03:41 crc kubenswrapper[4970]: I0930 10:03:41.512961 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Sep 30 10:03:41 crc kubenswrapper[4970]: I0930 10:03:41.515651 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Sep 30 10:03:42 crc kubenswrapper[4970]: I0930 10:03:42.291682 4970 generic.go:334] "Generic (PLEG): container finished" podID="6ea9f861-9877-480f-a490-08c80d2580cf" containerID="ae088867f7339eb9f50393931a94de209e64d3000981e959cd19fcd5a9b19886" exitCode=0 Sep 30 10:03:42 crc kubenswrapper[4970]: I0930 10:03:42.291829 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2n6k8" event={"ID":"6ea9f861-9877-480f-a490-08c80d2580cf","Type":"ContainerDied","Data":"ae088867f7339eb9f50393931a94de209e64d3000981e959cd19fcd5a9b19886"} Sep 30 10:03:43 crc kubenswrapper[4970]: E0930 10:03:43.699510 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd890158e_f620_412b_ad4f_11437ded0689.slice/crio-660b0e488580dc7142b245aa3302eb7db047cb78a041e9e408641c278b8e033d\": RecentStats: unable to find data in memory cache]" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.314374 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-zdzz8" event={"ID":"3d62c32c-2c57-455b-9d92-6add27e33831","Type":"ContainerDied","Data":"0ba39ae898928cc0350ca162e28e95c246e662133930452493ee14ddae5703f9"} Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.314691 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ba39ae898928cc0350ca162e28e95c246e662133930452493ee14ddae5703f9" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.317174 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-2n6k8" event={"ID":"6ea9f861-9877-480f-a490-08c80d2580cf","Type":"ContainerDied","Data":"cd7ee2cbae010a65b1fe5f2b352ed4c8bf880428b6539b2cf191555040ac056f"} Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.317248 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd7ee2cbae010a65b1fe5f2b352ed4c8bf880428b6539b2cf191555040ac056f" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.338646 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.349663 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-zdzz8" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.506094 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6ea9f861-9877-480f-a490-08c80d2580cf-db-sync-config-data\") pod \"6ea9f861-9877-480f-a490-08c80d2580cf\" (UID: \"6ea9f861-9877-480f-a490-08c80d2580cf\") " Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.506144 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-config-data\") pod \"3d62c32c-2c57-455b-9d92-6add27e33831\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.506197 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-combined-ca-bundle\") pod \"3d62c32c-2c57-455b-9d92-6add27e33831\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.506268 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8xnm\" (UniqueName: \"kubernetes.io/projected/6ea9f861-9877-480f-a490-08c80d2580cf-kube-api-access-x8xnm\") pod \"6ea9f861-9877-480f-a490-08c80d2580cf\" (UID: \"6ea9f861-9877-480f-a490-08c80d2580cf\") " Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.506301 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-scripts\") pod \"3d62c32c-2c57-455b-9d92-6add27e33831\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.506333 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d62c32c-2c57-455b-9d92-6add27e33831-logs\") pod \"3d62c32c-2c57-455b-9d92-6add27e33831\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.506441 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2vjz\" (UniqueName: \"kubernetes.io/projected/3d62c32c-2c57-455b-9d92-6add27e33831-kube-api-access-z2vjz\") pod \"3d62c32c-2c57-455b-9d92-6add27e33831\" (UID: \"3d62c32c-2c57-455b-9d92-6add27e33831\") " Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.506459 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea9f861-9877-480f-a490-08c80d2580cf-combined-ca-bundle\") pod \"6ea9f861-9877-480f-a490-08c80d2580cf\" (UID: \"6ea9f861-9877-480f-a490-08c80d2580cf\") " Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.507358 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d62c32c-2c57-455b-9d92-6add27e33831-logs" (OuterVolumeSpecName: "logs") pod "3d62c32c-2c57-455b-9d92-6add27e33831" (UID: "3d62c32c-2c57-455b-9d92-6add27e33831"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.513400 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-scripts" (OuterVolumeSpecName: "scripts") pod "3d62c32c-2c57-455b-9d92-6add27e33831" (UID: "3d62c32c-2c57-455b-9d92-6add27e33831"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.513573 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea9f861-9877-480f-a490-08c80d2580cf-kube-api-access-x8xnm" (OuterVolumeSpecName: "kube-api-access-x8xnm") pod "6ea9f861-9877-480f-a490-08c80d2580cf" (UID: "6ea9f861-9877-480f-a490-08c80d2580cf"). InnerVolumeSpecName "kube-api-access-x8xnm". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.513861 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea9f861-9877-480f-a490-08c80d2580cf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6ea9f861-9877-480f-a490-08c80d2580cf" (UID: "6ea9f861-9877-480f-a490-08c80d2580cf"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.541458 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-config-data" (OuterVolumeSpecName: "config-data") pod "3d62c32c-2c57-455b-9d92-6add27e33831" (UID: "3d62c32c-2c57-455b-9d92-6add27e33831"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.542104 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d62c32c-2c57-455b-9d92-6add27e33831-kube-api-access-z2vjz" (OuterVolumeSpecName: "kube-api-access-z2vjz") pod "3d62c32c-2c57-455b-9d92-6add27e33831" (UID: "3d62c32c-2c57-455b-9d92-6add27e33831"). InnerVolumeSpecName "kube-api-access-z2vjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.542789 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d62c32c-2c57-455b-9d92-6add27e33831" (UID: "3d62c32c-2c57-455b-9d92-6add27e33831"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.549037 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea9f861-9877-480f-a490-08c80d2580cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6ea9f861-9877-480f-a490-08c80d2580cf" (UID: "6ea9f861-9877-480f-a490-08c80d2580cf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.608600 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2vjz\" (UniqueName: \"kubernetes.io/projected/3d62c32c-2c57-455b-9d92-6add27e33831-kube-api-access-z2vjz\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.608631 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ea9f861-9877-480f-a490-08c80d2580cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.608641 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.608650 4970 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6ea9f861-9877-480f-a490-08c80d2580cf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.608659 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.608669 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8xnm\" (UniqueName: \"kubernetes.io/projected/6ea9f861-9877-480f-a490-08c80d2580cf-kube-api-access-x8xnm\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.608677 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3d62c32c-2c57-455b-9d92-6add27e33831-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:44 crc kubenswrapper[4970]: I0930 10:03:44.608686 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3d62c32c-2c57-455b-9d92-6add27e33831-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.328310 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-zdzz8" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.328336 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-2n6k8" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.551570 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-8cc9569d-ll5d9"] Sep 30 10:03:45 crc kubenswrapper[4970]: E0930 10:03:45.553477 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ea9f861-9877-480f-a490-08c80d2580cf" containerName="barbican-db-sync" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.553510 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ea9f861-9877-480f-a490-08c80d2580cf" containerName="barbican-db-sync" Sep 30 10:03:45 crc kubenswrapper[4970]: E0930 10:03:45.553551 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d62c32c-2c57-455b-9d92-6add27e33831" containerName="placement-db-sync" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.553560 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d62c32c-2c57-455b-9d92-6add27e33831" containerName="placement-db-sync" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.553861 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d62c32c-2c57-455b-9d92-6add27e33831" containerName="placement-db-sync" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.553882 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ea9f861-9877-480f-a490-08c80d2580cf" containerName="barbican-db-sync" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.555205 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.557343 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.557420 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.566452 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-k9dtb" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.566685 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.567998 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.569672 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8cc9569d-ll5d9"] Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.647654 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-767b995857-mf5zx"] Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.649118 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-767b995857-mf5zx" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.656197 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.656440 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-x9x8z" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.656724 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.690056 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-767b995857-mf5zx"] Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.726802 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-56bb9f86b-7tl55"] Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.728754 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.729264 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-config-data\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.729329 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-combined-ca-bundle\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.729395 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-scripts\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.729431 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5qps\" (UniqueName: \"kubernetes.io/projected/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-kube-api-access-r5qps\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.729471 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-logs\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.729527 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-internal-tls-certs\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 
10:03:45.729571 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-public-tls-certs\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.730871 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.811891 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56bb9f86b-7tl55"] Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.825069 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-dtc2p"] Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.826883 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.831912 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-internal-tls-certs\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.832315 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-public-tls-certs\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.832454 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d4c15f-9170-47c3-9716-919828e0cb40-logs\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.832547 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-config-data\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.834852 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-combined-ca-bundle\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.834922 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb32bf8e-e046-4e85-87b2-56993b0e6e30-config-data-custom\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.835041 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb32bf8e-e046-4e85-87b2-56993b0e6e30-logs\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.835096 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb32bf8e-e046-4e85-87b2-56993b0e6e30-combined-ca-bundle\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.835141 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4d4c15f-9170-47c3-9716-919828e0cb40-config-data-custom\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.835185 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4c15f-9170-47c3-9716-919828e0cb40-combined-ca-bundle\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.835254 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-scripts\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.835276 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4c15f-9170-47c3-9716-919828e0cb40-config-data\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.835355 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5qps\" (UniqueName: \"kubernetes.io/projected/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-kube-api-access-r5qps\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.835465 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-logs\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.835529 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpcvv\" (UniqueName: \"kubernetes.io/projected/cb32bf8e-e046-4e85-87b2-56993b0e6e30-kube-api-access-qpcvv\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " 
pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.835886 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb32bf8e-e046-4e85-87b2-56993b0e6e30-config-data\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.836027 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8cs6\" (UniqueName: \"kubernetes.io/projected/f4d4c15f-9170-47c3-9716-919828e0cb40-kube-api-access-m8cs6\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.844038 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-scripts\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.844491 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-logs\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.847537 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-config-data\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.849060 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-internal-tls-certs\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.855618 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-public-tls-certs\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.860407 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-combined-ca-bundle\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.872769 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5qps\" (UniqueName: \"kubernetes.io/projected/16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890-kube-api-access-r5qps\") pod \"placement-8cc9569d-ll5d9\" (UID: \"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890\") " pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 
10:03:45.885927 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.887437 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-dtc2p"] Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.937709 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-dns-svc\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.937745 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-ovsdbserver-sb\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.937785 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb32bf8e-e046-4e85-87b2-56993b0e6e30-config-data-custom\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.937807 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-ovsdbserver-nb\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.937858 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb32bf8e-e046-4e85-87b2-56993b0e6e30-logs\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.937876 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb32bf8e-e046-4e85-87b2-56993b0e6e30-combined-ca-bundle\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.937895 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4c15f-9170-47c3-9716-919828e0cb40-combined-ca-bundle\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx" Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.937913 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4d4c15f-9170-47c3-9716-919828e0cb40-config-data-custom\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx" 
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.937941 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-config\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.937965 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4c15f-9170-47c3-9716-919828e0cb40-config-data\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.938021 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpcvv\" (UniqueName: \"kubernetes.io/projected/cb32bf8e-e046-4e85-87b2-56993b0e6e30-kube-api-access-qpcvv\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.938039 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb32bf8e-e046-4e85-87b2-56993b0e6e30-config-data\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.938061 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8cs6\" (UniqueName: \"kubernetes.io/projected/f4d4c15f-9170-47c3-9716-919828e0cb40-kube-api-access-m8cs6\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.938087 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-dns-swift-storage-0\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.938142 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d4c15f-9170-47c3-9716-919828e0cb40-logs\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.938166 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd8qc\" (UniqueName: \"kubernetes.io/projected/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-kube-api-access-fd8qc\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.939911 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cb32bf8e-e046-4e85-87b2-56993b0e6e30-logs\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.946447 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4d4c15f-9170-47c3-9716-919828e0cb40-logs\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.950046 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cb32bf8e-e046-4e85-87b2-56993b0e6e30-config-data-custom\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.970374 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb32bf8e-e046-4e85-87b2-56993b0e6e30-combined-ca-bundle\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.970457 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6d9bbff474-2259d"]
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.971841 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpcvv\" (UniqueName: \"kubernetes.io/projected/cb32bf8e-e046-4e85-87b2-56993b0e6e30-kube-api-access-qpcvv\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.974068 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4d4c15f-9170-47c3-9716-919828e0cb40-combined-ca-bundle\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.985452 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4d4c15f-9170-47c3-9716-919828e0cb40-config-data-custom\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx"
Sep 30 10:03:45 crc kubenswrapper[4970]: I0930 10:03:45.986299 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb32bf8e-e046-4e85-87b2-56993b0e6e30-config-data\") pod \"barbican-keystone-listener-56bb9f86b-7tl55\" (UID: \"cb32bf8e-e046-4e85-87b2-56993b0e6e30\") " pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:45.990474 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4d4c15f-9170-47c3-9716-919828e0cb40-config-data\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:45.996149 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.007022 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8cs6\" (UniqueName: \"kubernetes.io/projected/f4d4c15f-9170-47c3-9716-919828e0cb40-kube-api-access-m8cs6\") pod \"barbican-worker-767b995857-mf5zx\" (UID: \"f4d4c15f-9170-47c3-9716-919828e0cb40\") " pod="openstack/barbican-worker-767b995857-mf5zx"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.018369 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.042058 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d9bbff474-2259d"]
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.077518 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-config\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.076056 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-config\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.078190 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-dns-swift-storage-0\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.078495 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd8qc\" (UniqueName: \"kubernetes.io/projected/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-kube-api-access-fd8qc\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.078543 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-dns-svc\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.078566 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-ovsdbserver-sb\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.078609 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-ovsdbserver-nb\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.078848 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-dns-swift-storage-0\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.079312 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-ovsdbserver-nb\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.079503 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-dns-svc\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.079867 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-ovsdbserver-sb\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.084063 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.097894 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd8qc\" (UniqueName: \"kubernetes.io/projected/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-kube-api-access-fd8qc\") pod \"dnsmasq-dns-6d66f584d7-dtc2p\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.181166 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8172f08-1f09-4a64-ad70-12b06b9cd244-logs\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.181265 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsmsw\" (UniqueName: \"kubernetes.io/projected/d8172f08-1f09-4a64-ad70-12b06b9cd244-kube-api-access-xsmsw\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.181351 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-combined-ca-bundle\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.181431 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-config-data-custom\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.181459 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-config-data\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.283220 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-config-data-custom\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.283324 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-config-data\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.283458 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8172f08-1f09-4a64-ad70-12b06b9cd244-logs\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.283504 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsmsw\" (UniqueName: \"kubernetes.io/projected/d8172f08-1f09-4a64-ad70-12b06b9cd244-kube-api-access-xsmsw\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.283587 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-combined-ca-bundle\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.284006 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8172f08-1f09-4a64-ad70-12b06b9cd244-logs\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.287463 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-config-data-custom\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.288731 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-combined-ca-bundle\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d" Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.288866 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-config-data\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d" Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.301320 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-767b995857-mf5zx" Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.302079 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsmsw\" (UniqueName: \"kubernetes.io/projected/d8172f08-1f09-4a64-ad70-12b06b9cd244-kube-api-access-xsmsw\") pod \"barbican-api-6d9bbff474-2259d\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") " pod="openstack/barbican-api-6d9bbff474-2259d" Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.356174 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" Sep 30 10:03:46 crc kubenswrapper[4970]: I0930 10:03:46.366789 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d9bbff474-2259d" Sep 30 10:03:47 crc kubenswrapper[4970]: E0930 10:03:47.078290 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.134223 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-8cc9569d-ll5d9"] Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.241927 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6f6674bb4b-gp2wp" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.297372 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-8fbb4f9c8-n8t5n" podUID="8b479413-73f2-4159-8ec6-5e23f139c53c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.314404 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-56bb9f86b-7tl55"] Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.326226 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-767b995857-mf5zx"] Sep 30 10:03:47 crc kubenswrapper[4970]: W0930 10:03:47.342763 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4d4c15f_9170_47c3_9716_919828e0cb40.slice/crio-769b52303b5948a20d2c4175a38b01c796e23952ffad3c255c777740d3c57d11 WatchSource:0}: Error finding container 
769b52303b5948a20d2c4175a38b01c796e23952ffad3c255c777740d3c57d11: Status 404 returned error can't find the container with id 769b52303b5948a20d2c4175a38b01c796e23952ffad3c255c777740d3c57d11 Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.360224 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8cc9569d-ll5d9" event={"ID":"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890","Type":"ContainerStarted","Data":"5c4d502a3b78bdef8319979bcffe05bf23f3aa389e29a95a429e910941ddba93"} Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.373940 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229","Type":"ContainerStarted","Data":"757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c"} Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.374178 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerName="ceilometer-notification-agent" containerID="cri-o://517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0" gracePeriod=30 Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.374445 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.375100 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerName="proxy-httpd" containerID="cri-o://757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c" gracePeriod=30 Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.375166 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerName="sg-core" containerID="cri-o://5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be" gracePeriod=30 Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.559349 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-dtc2p"] Sep 30 10:03:47 crc kubenswrapper[4970]: I0930 10:03:47.566655 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6d9bbff474-2259d"] Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.400065 4970 generic.go:334] "Generic (PLEG): container finished" podID="d0dde2f5-0cbd-4d33-b570-d083b9abbf57" containerID="896ea74051ca8364c6d4df04ed0fd6761601ab176fcb1f4cf1d0fd745a6f6038" exitCode=0 Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.401159 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" event={"ID":"d0dde2f5-0cbd-4d33-b570-d083b9abbf57","Type":"ContainerDied","Data":"896ea74051ca8364c6d4df04ed0fd6761601ab176fcb1f4cf1d0fd745a6f6038"} Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.401237 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" event={"ID":"d0dde2f5-0cbd-4d33-b570-d083b9abbf57","Type":"ContainerStarted","Data":"61bb9edb756f4a97dc9fb24dff576befce67f0b9ebd6f841a22e3145bd779d5d"} Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.415657 4970 generic.go:334] "Generic (PLEG): container finished" podID="ad24190f-4eb6-49c8-bad6-c33a817cd9c6" containerID="1084cb733673f7ccfee88310d49c49131ab929fb6ac87c9df6ba6704eabce6b3" exitCode=0 Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.415788 4970 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4pw5n" event={"ID":"ad24190f-4eb6-49c8-bad6-c33a817cd9c6","Type":"ContainerDied","Data":"1084cb733673f7ccfee88310d49c49131ab929fb6ac87c9df6ba6704eabce6b3"} Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.457310 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8cc9569d-ll5d9" event={"ID":"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890","Type":"ContainerStarted","Data":"ae428c8e79c9fe72724cbafba95a51af2fe6531b700487bdebe0601c7f3cb574"} Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.457626 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-8cc9569d-ll5d9" event={"ID":"16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890","Type":"ContainerStarted","Data":"063b06d500c80532848a135aac7a83e7912733d3e3c1df2bb0d9d2eab3e241b3"} Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.458367 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.458412 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-8cc9569d-ll5d9" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.467680 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-767b995857-mf5zx" event={"ID":"f4d4c15f-9170-47c3-9716-919828e0cb40","Type":"ContainerStarted","Data":"769b52303b5948a20d2c4175a38b01c796e23952ffad3c255c777740d3c57d11"} Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.488079 4970 generic.go:334] "Generic (PLEG): container finished" podID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerID="757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c" exitCode=0 Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.488121 4970 generic.go:334] "Generic (PLEG): container finished" podID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerID="5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be" exitCode=2 Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.488482 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229","Type":"ContainerDied","Data":"757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c"} Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.488515 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229","Type":"ContainerDied","Data":"5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be"} Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.500964 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" event={"ID":"cb32bf8e-e046-4e85-87b2-56993b0e6e30","Type":"ContainerStarted","Data":"e63700f50b2a3d29d70b87fddf41134b9ceae5466ead3586ee718053a137ac62"} Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.509793 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d9bbff474-2259d" event={"ID":"d8172f08-1f09-4a64-ad70-12b06b9cd244","Type":"ContainerStarted","Data":"22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7"} Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.509876 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d9bbff474-2259d" 
event={"ID":"d8172f08-1f09-4a64-ad70-12b06b9cd244","Type":"ContainerStarted","Data":"d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20"} Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.509887 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d9bbff474-2259d" event={"ID":"d8172f08-1f09-4a64-ad70-12b06b9cd244","Type":"ContainerStarted","Data":"78fbcb1289a22616de41945f635c0d4c94912bd25dd1ac9b1af701e022acd22b"} Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.510140 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d9bbff474-2259d" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.510242 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6d9bbff474-2259d" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.527678 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-8cc9569d-ll5d9" podStartSLOduration=3.527658101 podStartE2EDuration="3.527658101s" podCreationTimestamp="2025-09-30 10:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:03:48.497743245 +0000 UTC m=+1041.569594179" watchObservedRunningTime="2025-09-30 10:03:48.527658101 +0000 UTC m=+1041.599509035" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.573009 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6d9bbff474-2259d" podStartSLOduration=3.572967137 podStartE2EDuration="3.572967137s" podCreationTimestamp="2025-09-30 10:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:03:48.535243263 +0000 UTC m=+1041.607094207" watchObservedRunningTime="2025-09-30 10:03:48.572967137 +0000 UTC m=+1041.644818061" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.654164 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-868647ddbb-dxwsf"] Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.658221 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.663058 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.664692 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.669961 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-868647ddbb-dxwsf"] Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.761706 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-config-data\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.762089 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-internal-tls-certs\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.762166 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6k6t\" (UniqueName: \"kubernetes.io/projected/bbcbf5f3-02eb-4969-af25-0c219017b29a-kube-api-access-c6k6t\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.762208 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-config-data-custom\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.762337 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-public-tls-certs\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.762391 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf5f3-02eb-4969-af25-0c219017b29a-logs\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.763357 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-combined-ca-bundle\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.867665 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-public-tls-certs\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.867725 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf5f3-02eb-4969-af25-0c219017b29a-logs\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.867807 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-combined-ca-bundle\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.867830 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-config-data\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.867906 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-internal-tls-certs\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.867933 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c6k6t\" (UniqueName: \"kubernetes.io/projected/bbcbf5f3-02eb-4969-af25-0c219017b29a-kube-api-access-c6k6t\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.867953 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-config-data-custom\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.868228 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbcbf5f3-02eb-4969-af25-0c219017b29a-logs\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.872500 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-internal-tls-certs\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.872528 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-public-tls-certs\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.873363 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-combined-ca-bundle\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.874275 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-config-data\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.889621 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c6k6t\" (UniqueName: \"kubernetes.io/projected/bbcbf5f3-02eb-4969-af25-0c219017b29a-kube-api-access-c6k6t\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:48 crc kubenswrapper[4970]: I0930 10:03:48.894576 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bbcbf5f3-02eb-4969-af25-0c219017b29a-config-data-custom\") pod \"barbican-api-868647ddbb-dxwsf\" (UID: \"bbcbf5f3-02eb-4969-af25-0c219017b29a\") " pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:49 crc kubenswrapper[4970]: I0930 10:03:49.008401 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:49 crc kubenswrapper[4970]: I0930 10:03:49.555284 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" event={"ID":"d0dde2f5-0cbd-4d33-b570-d083b9abbf57","Type":"ContainerStarted","Data":"d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021"} Sep 30 10:03:49 crc kubenswrapper[4970]: I0930 10:03:49.555819 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" Sep 30 10:03:49 crc kubenswrapper[4970]: I0930 10:03:49.582014 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" podStartSLOduration=4.5819788169999995 podStartE2EDuration="4.581978817s" podCreationTimestamp="2025-09-30 10:03:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:03:49.58096637 +0000 UTC m=+1042.652817304" watchObservedRunningTime="2025-09-30 10:03:49.581978817 +0000 UTC m=+1042.653829791" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.070178 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-868647ddbb-dxwsf"] Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.299426 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.410697 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-combined-ca-bundle\") pod \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.410777 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8bmz\" (UniqueName: \"kubernetes.io/projected/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-kube-api-access-q8bmz\") pod \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.410824 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-db-sync-config-data\") pod \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.410870 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-config-data\") pod \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.410910 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-scripts\") pod \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.410982 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-etc-machine-id\") pod \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\" (UID: \"ad24190f-4eb6-49c8-bad6-c33a817cd9c6\") " Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.411587 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ad24190f-4eb6-49c8-bad6-c33a817cd9c6" (UID: "ad24190f-4eb6-49c8-bad6-c33a817cd9c6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.419166 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-scripts" (OuterVolumeSpecName: "scripts") pod "ad24190f-4eb6-49c8-bad6-c33a817cd9c6" (UID: "ad24190f-4eb6-49c8-bad6-c33a817cd9c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.421608 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "ad24190f-4eb6-49c8-bad6-c33a817cd9c6" (UID: "ad24190f-4eb6-49c8-bad6-c33a817cd9c6"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.427778 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-kube-api-access-q8bmz" (OuterVolumeSpecName: "kube-api-access-q8bmz") pod "ad24190f-4eb6-49c8-bad6-c33a817cd9c6" (UID: "ad24190f-4eb6-49c8-bad6-c33a817cd9c6"). InnerVolumeSpecName "kube-api-access-q8bmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.513972 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.514093 4970 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.514104 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8bmz\" (UniqueName: \"kubernetes.io/projected/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-kube-api-access-q8bmz\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.514113 4970 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.547191 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ad24190f-4eb6-49c8-bad6-c33a817cd9c6" (UID: "ad24190f-4eb6-49c8-bad6-c33a817cd9c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.558246 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-config-data" (OuterVolumeSpecName: "config-data") pod "ad24190f-4eb6-49c8-bad6-c33a817cd9c6" (UID: "ad24190f-4eb6-49c8-bad6-c33a817cd9c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.565264 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4pw5n" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.565273 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4pw5n" event={"ID":"ad24190f-4eb6-49c8-bad6-c33a817cd9c6","Type":"ContainerDied","Data":"45a846fdd5d2c8a9ea0456d0cb9a00374260e6505689729e4384f5abc1a9f1ae"} Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.565656 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45a846fdd5d2c8a9ea0456d0cb9a00374260e6505689729e4384f5abc1a9f1ae" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.567136 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-767b995857-mf5zx" event={"ID":"f4d4c15f-9170-47c3-9716-919828e0cb40","Type":"ContainerStarted","Data":"9745d3f7bec2f54993bbdb61bab53c77633868aa6756e72d7856d3741e467e83"} Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.567180 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-767b995857-mf5zx" event={"ID":"f4d4c15f-9170-47c3-9716-919828e0cb40","Type":"ContainerStarted","Data":"aa862188d954e2d8efafe0bd78643a23eff07074d5ade58d33354995290c76f3"} Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.569941 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-868647ddbb-dxwsf" event={"ID":"bbcbf5f3-02eb-4969-af25-0c219017b29a","Type":"ContainerStarted","Data":"6de6fbdb6fe2757f25f9fd3bfc5e0d27d7cd51b916127db0235d8412591d6b0a"} Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.569969 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-868647ddbb-dxwsf" event={"ID":"bbcbf5f3-02eb-4969-af25-0c219017b29a","Type":"ContainerStarted","Data":"99f9dd9d0fb35110dcc53b7d80efeb4d35ad80beebdc3eede4a548a22f0cd56d"} Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.572311 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" event={"ID":"cb32bf8e-e046-4e85-87b2-56993b0e6e30","Type":"ContainerStarted","Data":"a996b1933342382136403b6930aea7a45e2fb36924f6d2a3c0e209c995f63acb"} Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.572360 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" event={"ID":"cb32bf8e-e046-4e85-87b2-56993b0e6e30","Type":"ContainerStarted","Data":"2dfa11116e24edf9b1b1ea931e4798754c897f4e7b222a283dcfdfebc98392dc"} Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.588807 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-767b995857-mf5zx" podStartSLOduration=3.47836769 podStartE2EDuration="5.588788729s" podCreationTimestamp="2025-09-30 10:03:45 +0000 UTC" firstStartedPulling="2025-09-30 10:03:47.352803308 +0000 UTC m=+1040.424654242" lastFinishedPulling="2025-09-30 10:03:49.463224347 +0000 UTC m=+1042.535075281" observedRunningTime="2025-09-30 10:03:50.58355341 +0000 UTC m=+1043.655404344" watchObservedRunningTime="2025-09-30 10:03:50.588788729 +0000 UTC m=+1043.660639663" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.615377 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.615404 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/ad24190f-4eb6-49c8-bad6-c33a817cd9c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.626338 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-56bb9f86b-7tl55" podStartSLOduration=3.51146119 podStartE2EDuration="5.626293287s" podCreationTimestamp="2025-09-30 10:03:45 +0000 UTC" firstStartedPulling="2025-09-30 10:03:47.352182011 +0000 UTC m=+1040.424032945" lastFinishedPulling="2025-09-30 10:03:49.467014108 +0000 UTC m=+1042.538865042" observedRunningTime="2025-09-30 10:03:50.60010137 +0000 UTC m=+1043.671952304" watchObservedRunningTime="2025-09-30 10:03:50.626293287 +0000 UTC m=+1043.698144221" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.708213 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 10:03:50 crc kubenswrapper[4970]: E0930 10:03:50.708830 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad24190f-4eb6-49c8-bad6-c33a817cd9c6" containerName="cinder-db-sync" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.708853 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad24190f-4eb6-49c8-bad6-c33a817cd9c6" containerName="cinder-db-sync" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.709136 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad24190f-4eb6-49c8-bad6-c33a817cd9c6" containerName="cinder-db-sync" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.712361 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.725903 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.727735 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bfskb" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.727981 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.728121 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.735135 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.818464 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-dtc2p"] Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.832820 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-config-data\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.832903 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.832930 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.832951 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-scripts\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.833075 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/711ac5a4-f629-436a-9cc4-2e7003919a5b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.833214 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvstw\" (UniqueName: \"kubernetes.io/projected/711ac5a4-f629-436a-9cc4-2e7003919a5b-kube-api-access-fvstw\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.915107 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-xrhdk"] Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.923559 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.933666 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-xrhdk"] Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.940939 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-config-data\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.941119 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.941170 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.941185 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-scripts\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.941409 4970 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/711ac5a4-f629-436a-9cc4-2e7003919a5b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.941714 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvstw\" (UniqueName: \"kubernetes.io/projected/711ac5a4-f629-436a-9cc4-2e7003919a5b-kube-api-access-fvstw\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.949419 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/711ac5a4-f629-436a-9cc4-2e7003919a5b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.950830 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.950895 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-config-data\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.958777 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.959948 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-scripts\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:50 crc kubenswrapper[4970]: I0930 10:03:50.993389 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvstw\" (UniqueName: \"kubernetes.io/projected/711ac5a4-f629-436a-9cc4-2e7003919a5b-kube-api-access-fvstw\") pod \"cinder-scheduler-0\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") " pod="openstack/cinder-scheduler-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.032029 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.034226 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.037888 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.044544 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-dns-swift-storage-0\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.044624 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-config\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.044674 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-dns-svc\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.044694 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-ovsdbserver-nb\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.044807 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-825mj\" (UniqueName: \"kubernetes.io/projected/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-kube-api-access-825mj\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.044883 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-ovsdbserver-sb\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.062023 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.066293 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.148620 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-config-data-custom\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.148688 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-ovsdbserver-sb\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.148735 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-dns-swift-storage-0\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.148764 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-config-data\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.148801 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-config\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.148827 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-dns-svc\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.148846 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-ovsdbserver-nb\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.148889 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87888625-1e32-4c24-b999-7bd8df2ac975-logs\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.148937 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87888625-1e32-4c24-b999-7bd8df2ac975-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.148958 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnxz7\" (UniqueName: \"kubernetes.io/projected/87888625-1e32-4c24-b999-7bd8df2ac975-kube-api-access-vnxz7\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.148973 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-scripts\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.149006 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-825mj\" (UniqueName: \"kubernetes.io/projected/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-kube-api-access-825mj\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.149050 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.150241 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-ovsdbserver-sb\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.155231 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-dns-swift-storage-0\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.156058 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-config\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.156637 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-dns-svc\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.157159 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-ovsdbserver-nb\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.195003 
4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-825mj\" (UniqueName: \"kubernetes.io/projected/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-kube-api-access-825mj\") pod \"dnsmasq-dns-674b76c99f-xrhdk\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.252167 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87888625-1e32-4c24-b999-7bd8df2ac975-logs\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.252238 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87888625-1e32-4c24-b999-7bd8df2ac975-etc-machine-id\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.252255 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnxz7\" (UniqueName: \"kubernetes.io/projected/87888625-1e32-4c24-b999-7bd8df2ac975-kube-api-access-vnxz7\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.252271 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-scripts\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.252310 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.252341 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-config-data-custom\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.252381 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-config-data\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.253292 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87888625-1e32-4c24-b999-7bd8df2ac975-etc-machine-id\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.254975 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.256893 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-config-data\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.257318 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87888625-1e32-4c24-b999-7bd8df2ac975-logs\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.257823 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-scripts\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.262689 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-config-data-custom\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.278125 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.292697 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnxz7\" (UniqueName: \"kubernetes.io/projected/87888625-1e32-4c24-b999-7bd8df2ac975-kube-api-access-vnxz7\") pod \"cinder-api-0\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.309589 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.428617 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.456157 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-log-httpd\") pod \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.456248 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nptcn\" (UniqueName: \"kubernetes.io/projected/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-kube-api-access-nptcn\") pod \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.456383 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-combined-ca-bundle\") pod \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.456413 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-scripts\") pod \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.456439 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-config-data\") pod \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.456467 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-sg-core-conf-yaml\") pod \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.456503 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-run-httpd\") pod \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\" (UID: \"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229\") " Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.457336 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" (UID: "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.457627 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" (UID: "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.461354 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-kube-api-access-nptcn" (OuterVolumeSpecName: "kube-api-access-nptcn") pod "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" (UID: "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229"). InnerVolumeSpecName "kube-api-access-nptcn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.468220 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-scripts" (OuterVolumeSpecName: "scripts") pod "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" (UID: "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.509262 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" (UID: "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.549242 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" (UID: "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.550392 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-config-data" (OuterVolumeSpecName: "config-data") pod "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" (UID: "0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.559438 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.559562 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.559572 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.559584 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.559592 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.559599 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.559609 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nptcn\" (UniqueName: \"kubernetes.io/projected/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229-kube-api-access-nptcn\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.612685 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-868647ddbb-dxwsf" event={"ID":"bbcbf5f3-02eb-4969-af25-0c219017b29a","Type":"ContainerStarted","Data":"61b8012305335882108b9c7608e8a6c253d0c78b3e2b22bc196fc35e78e4613c"} Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.613834 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.613859 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-868647ddbb-dxwsf" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.647864 4970 generic.go:334] "Generic (PLEG): container finished" podID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerID="517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0" exitCode=0 Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.648735 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.650235 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" podUID="d0dde2f5-0cbd-4d33-b570-d083b9abbf57" containerName="dnsmasq-dns" containerID="cri-o://d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021" gracePeriod=10 Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.651197 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229","Type":"ContainerDied","Data":"517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0"} Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.651242 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229","Type":"ContainerDied","Data":"3cae59d521c01fc9d137e9bb113d0d152f0fe03bcddd89a847462b330831e9bb"} Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.651261 4970 scope.go:117] "RemoveContainer" containerID="757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.658200 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-868647ddbb-dxwsf" podStartSLOduration=3.658179605 podStartE2EDuration="3.658179605s" podCreationTimestamp="2025-09-30 10:03:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:03:51.654353703 +0000 UTC m=+1044.726204637" watchObservedRunningTime="2025-09-30 10:03:51.658179605 +0000 UTC m=+1044.730030539" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.808066 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.835736 4970 scope.go:117] "RemoveContainer" containerID="5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.885392 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.901285 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:03:51 crc kubenswrapper[4970]: W0930 10:03:51.902114 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87888625_1e32_4c24_b999_7bd8df2ac975.slice/crio-0b5eba9e3966168db66e752fa27feae5f3e6b33f2109a711835abbad49212159 WatchSource:0}: Error finding container 0b5eba9e3966168db66e752fa27feae5f3e6b33f2109a711835abbad49212159: Status 404 returned error can't find the container with id 0b5eba9e3966168db66e752fa27feae5f3e6b33f2109a711835abbad49212159 Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.908945 4970 scope.go:117] "RemoveContainer" containerID="517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.912295 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.930067 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:03:51 crc kubenswrapper[4970]: E0930 10:03:51.930447 4970 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerName="proxy-httpd" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.930462 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerName="proxy-httpd" Sep 30 10:03:51 crc kubenswrapper[4970]: E0930 10:03:51.930489 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerName="sg-core" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.930497 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerName="sg-core" Sep 30 10:03:51 crc kubenswrapper[4970]: E0930 10:03:51.930515 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerName="ceilometer-notification-agent" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.930523 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerName="ceilometer-notification-agent" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.930709 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerName="sg-core" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.930729 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerName="ceilometer-notification-agent" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.930746 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" containerName="proxy-httpd" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.932731 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.940430 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.944346 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.950456 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.952109 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-xrhdk"] Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.970119 4970 scope.go:117] "RemoveContainer" containerID="757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c" Sep 30 10:03:51 crc kubenswrapper[4970]: E0930 10:03:51.971442 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c\": container with ID starting with 757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c not found: ID does not exist" containerID="757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.971499 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c"} err="failed to get container status \"757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c\": rpc error: code = NotFound desc = could not find container \"757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c\": container with ID starting with 757fddea072e17ff7cc6e6f71b338b4b550d6dc07fb60652ca377b842273815c not found: ID does not exist" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.971531 4970 scope.go:117] "RemoveContainer" containerID="5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be" Sep 30 10:03:51 crc kubenswrapper[4970]: E0930 10:03:51.972225 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be\": container with ID starting with 5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be not found: ID does not exist" containerID="5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.972249 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be"} err="failed to get container status \"5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be\": rpc error: code = NotFound desc = could not find container \"5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be\": container with ID starting with 5fa13ed3dbc77d4cba9e945f4a6d05315964b51c34e585e5ec948a7bebd190be not found: ID does not exist" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.972262 4970 scope.go:117] "RemoveContainer" containerID="517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0" Sep 30 10:03:51 crc kubenswrapper[4970]: E0930 10:03:51.978487 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0\": container with ID starting with 517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0 not found: ID does not exist" containerID="517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0" Sep 30 10:03:51 crc kubenswrapper[4970]: I0930 10:03:51.978542 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0"} err="failed to get container status \"517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0\": rpc error: code = NotFound desc = could not find container \"517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0\": container with ID starting with 517f45284bebef8bf828a2ea83553392a2f8271abf7c467da7ab71eaeaafacd0 not found: ID does not exist" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.084008 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-config-data\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.084676 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7k57\" (UniqueName: \"kubernetes.io/projected/9eb0de84-58b0-45c0-9aff-7b2a82d73079-kube-api-access-h7k57\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.084717 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eb0de84-58b0-45c0-9aff-7b2a82d73079-run-httpd\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.084775 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eb0de84-58b0-45c0-9aff-7b2a82d73079-log-httpd\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.084814 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.084933 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-scripts\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.085009 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.194656 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7k57\" (UniqueName: \"kubernetes.io/projected/9eb0de84-58b0-45c0-9aff-7b2a82d73079-kube-api-access-h7k57\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.196109 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eb0de84-58b0-45c0-9aff-7b2a82d73079-run-httpd\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.196255 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eb0de84-58b0-45c0-9aff-7b2a82d73079-log-httpd\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.196356 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.196476 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eb0de84-58b0-45c0-9aff-7b2a82d73079-run-httpd\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.196483 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-scripts\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.196545 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.196594 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eb0de84-58b0-45c0-9aff-7b2a82d73079-log-httpd\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.196662 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-config-data\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.201348 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.205760 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-config-data\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.209056 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.210176 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-scripts\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.219417 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7k57\" (UniqueName: \"kubernetes.io/projected/9eb0de84-58b0-45c0-9aff-7b2a82d73079-kube-api-access-h7k57\") pod \"ceilometer-0\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") " pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.364446 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.492604 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.502505 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-dns-swift-storage-0\") pod \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.502626 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-ovsdbserver-nb\") pod \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.503263 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-dns-svc\") pod \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.503437 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-ovsdbserver-sb\") pod \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.503483 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-config\") pod \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.503512 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd8qc\" 
(UniqueName: \"kubernetes.io/projected/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-kube-api-access-fd8qc\") pod \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\" (UID: \"d0dde2f5-0cbd-4d33-b570-d083b9abbf57\") " Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.510318 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-kube-api-access-fd8qc" (OuterVolumeSpecName: "kube-api-access-fd8qc") pod "d0dde2f5-0cbd-4d33-b570-d083b9abbf57" (UID: "d0dde2f5-0cbd-4d33-b570-d083b9abbf57"). InnerVolumeSpecName "kube-api-access-fd8qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.567427 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-config" (OuterVolumeSpecName: "config") pod "d0dde2f5-0cbd-4d33-b570-d083b9abbf57" (UID: "d0dde2f5-0cbd-4d33-b570-d083b9abbf57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.608367 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.608397 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd8qc\" (UniqueName: \"kubernetes.io/projected/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-kube-api-access-fd8qc\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.615316 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d0dde2f5-0cbd-4d33-b570-d083b9abbf57" (UID: "d0dde2f5-0cbd-4d33-b570-d083b9abbf57"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.631730 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d0dde2f5-0cbd-4d33-b570-d083b9abbf57" (UID: "d0dde2f5-0cbd-4d33-b570-d083b9abbf57"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.634872 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d0dde2f5-0cbd-4d33-b570-d083b9abbf57" (UID: "d0dde2f5-0cbd-4d33-b570-d083b9abbf57"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.655767 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d0dde2f5-0cbd-4d33-b570-d083b9abbf57" (UID: "d0dde2f5-0cbd-4d33-b570-d083b9abbf57"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.689245 4970 generic.go:334] "Generic (PLEG): container finished" podID="2185d7d0-bf11-4d98-87e7-bbd6f76f385e" containerID="6cf468f025e7afe3a0ea46ae4fe4654c0fdd8ff1552b9011662a8e21c2e89945" exitCode=0 Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.689343 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" event={"ID":"2185d7d0-bf11-4d98-87e7-bbd6f76f385e","Type":"ContainerDied","Data":"6cf468f025e7afe3a0ea46ae4fe4654c0fdd8ff1552b9011662a8e21c2e89945"} Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.689401 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" event={"ID":"2185d7d0-bf11-4d98-87e7-bbd6f76f385e","Type":"ContainerStarted","Data":"fec32d710a0eed5efb5fd83ca86f83884d6f4681223d41298e1c994cccb224c5"} Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.702480 4970 generic.go:334] "Generic (PLEG): container finished" podID="d0dde2f5-0cbd-4d33-b570-d083b9abbf57" containerID="d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021" exitCode=0 Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.702684 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.703752 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" event={"ID":"d0dde2f5-0cbd-4d33-b570-d083b9abbf57","Type":"ContainerDied","Data":"d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021"} Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.703830 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d66f584d7-dtc2p" event={"ID":"d0dde2f5-0cbd-4d33-b570-d083b9abbf57","Type":"ContainerDied","Data":"61bb9edb756f4a97dc9fb24dff576befce67f0b9ebd6f841a22e3145bd779d5d"} Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.703857 4970 scope.go:117] "RemoveContainer" containerID="d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.709965 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.710020 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.710034 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.710049 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d0dde2f5-0cbd-4d33-b570-d083b9abbf57-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.715356 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"711ac5a4-f629-436a-9cc4-2e7003919a5b","Type":"ContainerStarted","Data":"b618965db6285da49280d7d7b0249714271866844514b6af7d63625fdb934f7c"} Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.729673 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87888625-1e32-4c24-b999-7bd8df2ac975","Type":"ContainerStarted","Data":"0b5eba9e3966168db66e752fa27feae5f3e6b33f2109a711835abbad49212159"} Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.759073 4970 scope.go:117] "RemoveContainer" containerID="896ea74051ca8364c6d4df04ed0fd6761601ab176fcb1f4cf1d0fd745a6f6038" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.813217 4970 scope.go:117] "RemoveContainer" containerID="d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021" Sep 30 10:03:52 crc kubenswrapper[4970]: E0930 10:03:52.821789 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021\": container with ID starting with d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021 not found: ID does not exist" containerID="d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.821831 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021"} err="failed to get container status \"d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021\": rpc error: code = NotFound desc = could not find container \"d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021\": container with ID starting with d08daf9da87f3e17f0ccb01a3216e0366d159a503cc1bf074b5e9484b6237021 not found: ID does not exist" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.821883 4970 scope.go:117] "RemoveContainer" containerID="896ea74051ca8364c6d4df04ed0fd6761601ab176fcb1f4cf1d0fd745a6f6038" Sep 30 10:03:52 crc kubenswrapper[4970]: E0930 10:03:52.822284 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"896ea74051ca8364c6d4df04ed0fd6761601ab176fcb1f4cf1d0fd745a6f6038\": container with ID starting with 896ea74051ca8364c6d4df04ed0fd6761601ab176fcb1f4cf1d0fd745a6f6038 not found: ID does not exist" containerID="896ea74051ca8364c6d4df04ed0fd6761601ab176fcb1f4cf1d0fd745a6f6038" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.822324 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"896ea74051ca8364c6d4df04ed0fd6761601ab176fcb1f4cf1d0fd745a6f6038"} err="failed to get container status \"896ea74051ca8364c6d4df04ed0fd6761601ab176fcb1f4cf1d0fd745a6f6038\": rpc error: code = NotFound desc = could not find container \"896ea74051ca8364c6d4df04ed0fd6761601ab176fcb1f4cf1d0fd745a6f6038\": container with ID starting with 896ea74051ca8364c6d4df04ed0fd6761601ab176fcb1f4cf1d0fd745a6f6038 not found: ID does not exist" Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.830280 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-dtc2p"] Sep 30 10:03:52 crc kubenswrapper[4970]: I0930 10:03:52.859209 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d66f584d7-dtc2p"] Sep 30 10:03:53 crc kubenswrapper[4970]: I0930 10:03:53.159343 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ceilometer-0"] Sep 30 10:03:53 crc kubenswrapper[4970]: I0930 10:03:53.556197 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Sep 30 10:03:53 crc kubenswrapper[4970]: I0930 10:03:53.679893 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229" path="/var/lib/kubelet/pods/0ef0a5d8-fef2-4fcd-afd6-279a5ad2e229/volumes" Sep 30 10:03:53 crc kubenswrapper[4970]: I0930 10:03:53.681074 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0dde2f5-0cbd-4d33-b570-d083b9abbf57" path="/var/lib/kubelet/pods/d0dde2f5-0cbd-4d33-b570-d083b9abbf57/volumes" Sep 30 10:03:53 crc kubenswrapper[4970]: I0930 10:03:53.758745 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"711ac5a4-f629-436a-9cc4-2e7003919a5b","Type":"ContainerStarted","Data":"80adca25dff044014a98debe787a64f4295607628a2f68a030f0a63963a9ea48"} Sep 30 10:03:53 crc kubenswrapper[4970]: I0930 10:03:53.761859 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87888625-1e32-4c24-b999-7bd8df2ac975","Type":"ContainerStarted","Data":"bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f"} Sep 30 10:03:53 crc kubenswrapper[4970]: I0930 10:03:53.763292 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9eb0de84-58b0-45c0-9aff-7b2a82d73079","Type":"ContainerStarted","Data":"85e0f9c09d86b13337bf2d00a50c4e393f67012306569700224b6fa5c37f3f56"} Sep 30 10:03:53 crc kubenswrapper[4970]: I0930 10:03:53.765407 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" event={"ID":"2185d7d0-bf11-4d98-87e7-bbd6f76f385e","Type":"ContainerStarted","Data":"6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5"} Sep 30 10:03:53 crc kubenswrapper[4970]: I0930 10:03:53.765486 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:53 crc kubenswrapper[4970]: I0930 10:03:53.800261 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" podStartSLOduration=3.800237166 podStartE2EDuration="3.800237166s" podCreationTimestamp="2025-09-30 10:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:03:53.78759987 +0000 UTC m=+1046.859450804" watchObservedRunningTime="2025-09-30 10:03:53.800237166 +0000 UTC m=+1046.872088100" Sep 30 10:03:54 crc kubenswrapper[4970]: E0930 10:03:54.026748 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd890158e_f620_412b_ad4f_11437ded0689.slice/crio-660b0e488580dc7142b245aa3302eb7db047cb78a041e9e408641c278b8e033d\": RecentStats: unable to find data in memory cache]" Sep 30 10:03:54 crc kubenswrapper[4970]: I0930 10:03:54.787134 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9eb0de84-58b0-45c0-9aff-7b2a82d73079","Type":"ContainerStarted","Data":"b9009d88c7ee0430a93c6bd87022a31e376e616212194813caa0d493a547d591"} Sep 30 10:03:54 crc kubenswrapper[4970]: I0930 10:03:54.807362 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"711ac5a4-f629-436a-9cc4-2e7003919a5b","Type":"ContainerStarted","Data":"b96cfd4d2eddd1faa5f6cdb248bf5fa76ce3d0c6b60f5f1b59d7b0ed64e00874"} Sep 30 10:03:54 crc kubenswrapper[4970]: I0930 10:03:54.838040 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87888625-1e32-4c24-b999-7bd8df2ac975","Type":"ContainerStarted","Data":"8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb"} Sep 30 10:03:54 crc kubenswrapper[4970]: I0930 10:03:54.838200 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="87888625-1e32-4c24-b999-7bd8df2ac975" containerName="cinder-api-log" containerID="cri-o://bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f" gracePeriod=30 Sep 30 10:03:54 crc kubenswrapper[4970]: I0930 10:03:54.838483 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Sep 30 10:03:54 crc kubenswrapper[4970]: I0930 10:03:54.838645 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="87888625-1e32-4c24-b999-7bd8df2ac975" containerName="cinder-api" containerID="cri-o://8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb" gracePeriod=30 Sep 30 10:03:54 crc kubenswrapper[4970]: I0930 10:03:54.839695 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.913949561 podStartE2EDuration="4.839681045s" podCreationTimestamp="2025-09-30 10:03:50 +0000 UTC" firstStartedPulling="2025-09-30 10:03:51.835881004 +0000 UTC m=+1044.907731938" lastFinishedPulling="2025-09-30 10:03:52.761612488 +0000 UTC m=+1045.833463422" observedRunningTime="2025-09-30 10:03:54.834659861 +0000 UTC m=+1047.906510795" watchObservedRunningTime="2025-09-30 10:03:54.839681045 +0000 UTC m=+1047.911531979" Sep 30 10:03:54 crc kubenswrapper[4970]: I0930 10:03:54.865894 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.865877712 podStartE2EDuration="4.865877712s" podCreationTimestamp="2025-09-30 10:03:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:03:54.862675437 +0000 UTC m=+1047.934526371" watchObservedRunningTime="2025-09-30 10:03:54.865877712 +0000 UTC m=+1047.937728646" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.569802 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.697539 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-config-data\") pod \"87888625-1e32-4c24-b999-7bd8df2ac975\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.703289 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87888625-1e32-4c24-b999-7bd8df2ac975-etc-machine-id\") pod \"87888625-1e32-4c24-b999-7bd8df2ac975\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.703492 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/87888625-1e32-4c24-b999-7bd8df2ac975-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "87888625-1e32-4c24-b999-7bd8df2ac975" (UID: "87888625-1e32-4c24-b999-7bd8df2ac975"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.703570 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87888625-1e32-4c24-b999-7bd8df2ac975-logs\") pod \"87888625-1e32-4c24-b999-7bd8df2ac975\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.703648 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-combined-ca-bundle\") pod \"87888625-1e32-4c24-b999-7bd8df2ac975\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.703773 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-scripts\") pod \"87888625-1e32-4c24-b999-7bd8df2ac975\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.703815 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnxz7\" (UniqueName: \"kubernetes.io/projected/87888625-1e32-4c24-b999-7bd8df2ac975-kube-api-access-vnxz7\") pod \"87888625-1e32-4c24-b999-7bd8df2ac975\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.703911 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-config-data-custom\") pod \"87888625-1e32-4c24-b999-7bd8df2ac975\" (UID: \"87888625-1e32-4c24-b999-7bd8df2ac975\") " Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.704944 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87888625-1e32-4c24-b999-7bd8df2ac975-logs" (OuterVolumeSpecName: "logs") pod "87888625-1e32-4c24-b999-7bd8df2ac975" (UID: "87888625-1e32-4c24-b999-7bd8df2ac975"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.706192 4970 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/87888625-1e32-4c24-b999-7bd8df2ac975-etc-machine-id\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.706220 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/87888625-1e32-4c24-b999-7bd8df2ac975-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.717425 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87888625-1e32-4c24-b999-7bd8df2ac975-kube-api-access-vnxz7" (OuterVolumeSpecName: "kube-api-access-vnxz7") pod "87888625-1e32-4c24-b999-7bd8df2ac975" (UID: "87888625-1e32-4c24-b999-7bd8df2ac975"). InnerVolumeSpecName "kube-api-access-vnxz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.719269 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-scripts" (OuterVolumeSpecName: "scripts") pod "87888625-1e32-4c24-b999-7bd8df2ac975" (UID: "87888625-1e32-4c24-b999-7bd8df2ac975"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.732770 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "87888625-1e32-4c24-b999-7bd8df2ac975" (UID: "87888625-1e32-4c24-b999-7bd8df2ac975"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.771761 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87888625-1e32-4c24-b999-7bd8df2ac975" (UID: "87888625-1e32-4c24-b999-7bd8df2ac975"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.805174 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-config-data" (OuterVolumeSpecName: "config-data") pod "87888625-1e32-4c24-b999-7bd8df2ac975" (UID: "87888625-1e32-4c24-b999-7bd8df2ac975"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.809845 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.809869 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnxz7\" (UniqueName: \"kubernetes.io/projected/87888625-1e32-4c24-b999-7bd8df2ac975-kube-api-access-vnxz7\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.809883 4970 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-config-data-custom\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.809893 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.809902 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87888625-1e32-4c24-b999-7bd8df2ac975-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.848473 4970 generic.go:334] "Generic (PLEG): container finished" podID="ab2e1257-9947-41e6-8c2e-a366f4ea4c47" containerID="214925743b47809ebb0e3c09c776ecd30126704a55bd0862df1d1b312c361c57" exitCode=0 Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.848645 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ll2vt" event={"ID":"ab2e1257-9947-41e6-8c2e-a366f4ea4c47","Type":"ContainerDied","Data":"214925743b47809ebb0e3c09c776ecd30126704a55bd0862df1d1b312c361c57"} Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.851836 4970 generic.go:334] "Generic (PLEG): container finished" podID="87888625-1e32-4c24-b999-7bd8df2ac975" containerID="8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb" exitCode=0 Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.851883 4970 generic.go:334] "Generic (PLEG): container finished" podID="87888625-1e32-4c24-b999-7bd8df2ac975" containerID="bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f" exitCode=143 Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.851928 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87888625-1e32-4c24-b999-7bd8df2ac975","Type":"ContainerDied","Data":"8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb"} Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.852024 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87888625-1e32-4c24-b999-7bd8df2ac975","Type":"ContainerDied","Data":"bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f"} Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.852044 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"87888625-1e32-4c24-b999-7bd8df2ac975","Type":"ContainerDied","Data":"0b5eba9e3966168db66e752fa27feae5f3e6b33f2109a711835abbad49212159"} Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.852028 4970 scope.go:117] "RemoveContainer" containerID="8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb" Sep 30 10:03:55 crc 
kubenswrapper[4970]: I0930 10:03:55.852050 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.859741 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9eb0de84-58b0-45c0-9aff-7b2a82d73079","Type":"ContainerStarted","Data":"cc73d4cfcc8b59cb9d39c7ff5c1e649b070263868227c4cb47574f82d8d9577e"} Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.860294 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9eb0de84-58b0-45c0-9aff-7b2a82d73079","Type":"ContainerStarted","Data":"dc4adcdf28c7d1fecb7cb582dd272479236654301e9a9826055c3ed1073d8cdf"} Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.898863 4970 scope.go:117] "RemoveContainer" containerID="bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.931554 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.960086 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.961396 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Sep 30 10:03:55 crc kubenswrapper[4970]: E0930 10:03:55.961836 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0dde2f5-0cbd-4d33-b570-d083b9abbf57" containerName="init" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.961855 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0dde2f5-0cbd-4d33-b570-d083b9abbf57" containerName="init" Sep 30 10:03:55 crc kubenswrapper[4970]: E0930 10:03:55.961873 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87888625-1e32-4c24-b999-7bd8df2ac975" containerName="cinder-api-log" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.961882 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="87888625-1e32-4c24-b999-7bd8df2ac975" containerName="cinder-api-log" Sep 30 10:03:55 crc kubenswrapper[4970]: E0930 10:03:55.961899 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87888625-1e32-4c24-b999-7bd8df2ac975" containerName="cinder-api" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.961906 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="87888625-1e32-4c24-b999-7bd8df2ac975" containerName="cinder-api" Sep 30 10:03:55 crc kubenswrapper[4970]: E0930 10:03:55.961931 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0dde2f5-0cbd-4d33-b570-d083b9abbf57" containerName="dnsmasq-dns" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.961937 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0dde2f5-0cbd-4d33-b570-d083b9abbf57" containerName="dnsmasq-dns" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.962118 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="87888625-1e32-4c24-b999-7bd8df2ac975" containerName="cinder-api" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.962135 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0dde2f5-0cbd-4d33-b570-d083b9abbf57" containerName="dnsmasq-dns" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.962154 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="87888625-1e32-4c24-b999-7bd8df2ac975" containerName="cinder-api-log" Sep 30 10:03:55 crc 
kubenswrapper[4970]: I0930 10:03:55.965021 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.972249 4970 scope.go:117] "RemoveContainer" containerID="8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.973565 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.973815 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.973952 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Sep 30 10:03:55 crc kubenswrapper[4970]: E0930 10:03:55.975551 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb\": container with ID starting with 8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb not found: ID does not exist" containerID="8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.975598 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb"} err="failed to get container status \"8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb\": rpc error: code = NotFound desc = could not find container \"8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb\": container with ID starting with 8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb not found: ID does not exist" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.975630 4970 scope.go:117] "RemoveContainer" containerID="bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f" Sep 30 10:03:55 crc kubenswrapper[4970]: E0930 10:03:55.975963 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f\": container with ID starting with bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f not found: ID does not exist" containerID="bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.976014 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f"} err="failed to get container status \"bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f\": rpc error: code = NotFound desc = could not find container \"bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f\": container with ID starting with bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f not found: ID does not exist" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.976037 4970 scope.go:117] "RemoveContainer" containerID="8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.978331 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb"} err="failed to get 
container status \"8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb\": rpc error: code = NotFound desc = could not find container \"8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb\": container with ID starting with 8c81ed916a83db08fb7319cc6265145d87ba808f3064cf10c547c24b96a295fb not found: ID does not exist" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.978353 4970 scope.go:117] "RemoveContainer" containerID="bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.979944 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f"} err="failed to get container status \"bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f\": rpc error: code = NotFound desc = could not find container \"bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f\": container with ID starting with bbb78caa4b4c24efb909b2febb79f6363da8501a4ad954ba675f45cc732fb08f not found: ID does not exist" Sep 30 10:03:55 crc kubenswrapper[4970]: I0930 10:03:55.987382 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.062659 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.118191 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.118265 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb0312b9-337a-4175-ae77-cd4964578d13-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.118606 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.118742 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0312b9-337a-4175-ae77-cd4964578d13-logs\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.118806 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrn4w\" (UniqueName: \"kubernetes.io/projected/bb0312b9-337a-4175-ae77-cd4964578d13-kube-api-access-nrn4w\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.118932 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.119036 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-scripts\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.119085 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-config-data\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.119249 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-config-data-custom\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.221485 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-config-data-custom\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.221598 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.221651 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb0312b9-337a-4175-ae77-cd4964578d13-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.221747 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.221788 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0312b9-337a-4175-ae77-cd4964578d13-logs\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.221826 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrn4w\" (UniqueName: \"kubernetes.io/projected/bb0312b9-337a-4175-ae77-cd4964578d13-kube-api-access-nrn4w\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.221835 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb0312b9-337a-4175-ae77-cd4964578d13-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.221886 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.221925 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-scripts\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.221955 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-config-data\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.222417 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb0312b9-337a-4175-ae77-cd4964578d13-logs\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.228107 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-scripts\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.228262 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.229381 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-config-data\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.234673 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.235256 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.237719 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/bb0312b9-337a-4175-ae77-cd4964578d13-config-data-custom\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.246431 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrn4w\" (UniqueName: \"kubernetes.io/projected/bb0312b9-337a-4175-ae77-cd4964578d13-kube-api-access-nrn4w\") pod \"cinder-api-0\" (UID: \"bb0312b9-337a-4175-ae77-cd4964578d13\") " pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.344316 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.844641 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Sep 30 10:03:56 crc kubenswrapper[4970]: W0930 10:03:56.852907 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb0312b9_337a_4175_ae77_cd4964578d13.slice/crio-192ca4513802602b57ac972d95f01df8ddd997b54c1eb50f2bff153ab7fd1e79 WatchSource:0}: Error finding container 192ca4513802602b57ac972d95f01df8ddd997b54c1eb50f2bff153ab7fd1e79: Status 404 returned error can't find the container with id 192ca4513802602b57ac972d95f01df8ddd997b54c1eb50f2bff153ab7fd1e79 Sep 30 10:03:56 crc kubenswrapper[4970]: I0930 10:03:56.880588 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bb0312b9-337a-4175-ae77-cd4964578d13","Type":"ContainerStarted","Data":"192ca4513802602b57ac972d95f01df8ddd997b54c1eb50f2bff153ab7fd1e79"} Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.399929 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.447777 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbxxj\" (UniqueName: \"kubernetes.io/projected/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-kube-api-access-pbxxj\") pod \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\" (UID: \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\") " Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.447843 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-config\") pod \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\" (UID: \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\") " Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.447898 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-combined-ca-bundle\") pod \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\" (UID: \"ab2e1257-9947-41e6-8c2e-a366f4ea4c47\") " Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.454387 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-kube-api-access-pbxxj" (OuterVolumeSpecName: "kube-api-access-pbxxj") pod "ab2e1257-9947-41e6-8c2e-a366f4ea4c47" (UID: "ab2e1257-9947-41e6-8c2e-a366f4ea4c47"). InnerVolumeSpecName "kube-api-access-pbxxj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.487503 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ab2e1257-9947-41e6-8c2e-a366f4ea4c47" (UID: "ab2e1257-9947-41e6-8c2e-a366f4ea4c47"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.539604 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-config" (OuterVolumeSpecName: "config") pod "ab2e1257-9947-41e6-8c2e-a366f4ea4c47" (UID: "ab2e1257-9947-41e6-8c2e-a366f4ea4c47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.551273 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbxxj\" (UniqueName: \"kubernetes.io/projected/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-kube-api-access-pbxxj\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.551316 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.551329 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab2e1257-9947-41e6-8c2e-a366f4ea4c47-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.708425 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87888625-1e32-4c24-b999-7bd8df2ac975" path="/var/lib/kubelet/pods/87888625-1e32-4c24-b999-7bd8df2ac975/volumes" Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.933623 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9eb0de84-58b0-45c0-9aff-7b2a82d73079","Type":"ContainerStarted","Data":"127663ecbe2cc1d500f4ae987c736860c7b8cad788aca84c6e8f3cde29e83765"} Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.934598 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.939606 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bb0312b9-337a-4175-ae77-cd4964578d13","Type":"ContainerStarted","Data":"0fe283bad1e8a6c58b22c8df56b73f5781b75c8a518c77239d6dcba4264f83f0"} Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.956250 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ll2vt" event={"ID":"ab2e1257-9947-41e6-8c2e-a366f4ea4c47","Type":"ContainerDied","Data":"78f618dabd9b59fa58b15c6f76cc91d1e3b3dd4a602a4b0f518cb585a548c17a"} Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.956291 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78f618dabd9b59fa58b15c6f76cc91d1e3b3dd4a602a4b0f518cb585a548c17a" Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.956379 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ll2vt" Sep 30 10:03:57 crc kubenswrapper[4970]: I0930 10:03:57.963747 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.186082463 podStartE2EDuration="6.963728197s" podCreationTimestamp="2025-09-30 10:03:51 +0000 UTC" firstStartedPulling="2025-09-30 10:03:53.174174746 +0000 UTC m=+1046.246025680" lastFinishedPulling="2025-09-30 10:03:56.95182048 +0000 UTC m=+1050.023671414" observedRunningTime="2025-09-30 10:03:57.96307211 +0000 UTC m=+1051.034923064" watchObservedRunningTime="2025-09-30 10:03:57.963728197 +0000 UTC m=+1051.035579131" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.091755 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-xrhdk"] Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.092079 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" podUID="2185d7d0-bf11-4d98-87e7-bbd6f76f385e" containerName="dnsmasq-dns" containerID="cri-o://6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5" gracePeriod=10 Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.097153 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.167126 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-lmwbr"] Sep 30 10:03:58 crc kubenswrapper[4970]: E0930 10:03:58.167825 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab2e1257-9947-41e6-8c2e-a366f4ea4c47" containerName="neutron-db-sync" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.167841 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab2e1257-9947-41e6-8c2e-a366f4ea4c47" containerName="neutron-db-sync" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.172319 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab2e1257-9947-41e6-8c2e-a366f4ea4c47" containerName="neutron-db-sync" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.173384 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.221343 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-lmwbr"] Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.259397 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6f69b67d68-jtzdb"] Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.262164 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.266719 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-sg5bl" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.266895 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.273832 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f69b67d68-jtzdb"] Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.275345 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.295585 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.297525 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.297589 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.297647 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.297676 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-config\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.297735 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.297770 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdkp6\" (UniqueName: \"kubernetes.io/projected/ae142b63-43b3-488d-ab6d-327b057279b7-kube-api-access-rdkp6\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.405934 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.406000 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdkp6\" (UniqueName: \"kubernetes.io/projected/ae142b63-43b3-488d-ab6d-327b057279b7-kube-api-access-rdkp6\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.406057 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwgx5\" (UniqueName: \"kubernetes.io/projected/03969bb8-1309-49b6-910c-de58a7d7b3d4-kube-api-access-jwgx5\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.406100 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-ovndb-tls-certs\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.406170 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.406198 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-config\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.406220 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.406243 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-combined-ca-bundle\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.406277 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.406294 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-config\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.406324 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-httpd-config\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.407760 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.408456 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.408911 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.409256 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-config\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.419025 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.447064 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdkp6\" (UniqueName: \"kubernetes.io/projected/ae142b63-43b3-488d-ab6d-327b057279b7-kube-api-access-rdkp6\") pod \"dnsmasq-dns-6bb4fc677f-lmwbr\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.470200 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d9bbff474-2259d" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.504142 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.507924 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-ovndb-tls-certs\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.508008 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-config\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.508058 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-combined-ca-bundle\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.508120 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-httpd-config\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.508196 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwgx5\" (UniqueName: \"kubernetes.io/projected/03969bb8-1309-49b6-910c-de58a7d7b3d4-kube-api-access-jwgx5\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.520563 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-config\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.521095 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-httpd-config\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.522167 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-ovndb-tls-certs\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.534261 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-combined-ca-bundle\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.548081 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jwgx5\" (UniqueName: \"kubernetes.io/projected/03969bb8-1309-49b6-910c-de58a7d7b3d4-kube-api-access-jwgx5\") pod \"neutron-6f69b67d68-jtzdb\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.620607 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.748345 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6d9bbff474-2259d" Sep 30 10:03:58 crc kubenswrapper[4970]: I0930 10:03:58.912199 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.026830 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-825mj\" (UniqueName: \"kubernetes.io/projected/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-kube-api-access-825mj\") pod \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.026871 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-ovsdbserver-nb\") pod \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.027144 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-ovsdbserver-sb\") pod \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.027200 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-dns-swift-storage-0\") pod \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.027259 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-config\") pod \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.027406 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-dns-svc\") pod \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\" (UID: \"2185d7d0-bf11-4d98-87e7-bbd6f76f385e\") " Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.076732 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-kube-api-access-825mj" (OuterVolumeSpecName: "kube-api-access-825mj") pod "2185d7d0-bf11-4d98-87e7-bbd6f76f385e" (UID: "2185d7d0-bf11-4d98-87e7-bbd6f76f385e"). InnerVolumeSpecName "kube-api-access-825mj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.085098 4970 generic.go:334] "Generic (PLEG): container finished" podID="2185d7d0-bf11-4d98-87e7-bbd6f76f385e" containerID="6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5" exitCode=0 Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.088320 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" event={"ID":"2185d7d0-bf11-4d98-87e7-bbd6f76f385e","Type":"ContainerDied","Data":"6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5"} Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.088402 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" event={"ID":"2185d7d0-bf11-4d98-87e7-bbd6f76f385e","Type":"ContainerDied","Data":"fec32d710a0eed5efb5fd83ca86f83884d6f4681223d41298e1c994cccb224c5"} Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.088430 4970 scope.go:117] "RemoveContainer" containerID="6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.088679 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-674b76c99f-xrhdk" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.137248 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-825mj\" (UniqueName: \"kubernetes.io/projected/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-kube-api-access-825mj\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.197144 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-lmwbr"] Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.199254 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2185d7d0-bf11-4d98-87e7-bbd6f76f385e" (UID: "2185d7d0-bf11-4d98-87e7-bbd6f76f385e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.208453 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2185d7d0-bf11-4d98-87e7-bbd6f76f385e" (UID: "2185d7d0-bf11-4d98-87e7-bbd6f76f385e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.213639 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-config" (OuterVolumeSpecName: "config") pod "2185d7d0-bf11-4d98-87e7-bbd6f76f385e" (UID: "2185d7d0-bf11-4d98-87e7-bbd6f76f385e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.240339 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.240373 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.240385 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.273833 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2185d7d0-bf11-4d98-87e7-bbd6f76f385e" (UID: "2185d7d0-bf11-4d98-87e7-bbd6f76f385e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.317517 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2185d7d0-bf11-4d98-87e7-bbd6f76f385e" (UID: "2185d7d0-bf11-4d98-87e7-bbd6f76f385e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.342193 4970 scope.go:117] "RemoveContainer" containerID="6cf468f025e7afe3a0ea46ae4fe4654c0fdd8ff1552b9011662a8e21c2e89945" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.343407 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.343427 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2185d7d0-bf11-4d98-87e7-bbd6f76f385e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.422174 4970 scope.go:117] "RemoveContainer" containerID="6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5" Sep 30 10:03:59 crc kubenswrapper[4970]: E0930 10:03:59.423174 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5\": container with ID starting with 6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5 not found: ID does not exist" containerID="6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5" Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.423276 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5"} err="failed to get container status \"6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5\": rpc error: code = NotFound desc = could not find container \"6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5\": container with ID starting 
with 6b6c4ce44375ce713ff93e7adad063ca4e074d503aeff4117a50151b49b7b1a5 not found: ID does not exist"
Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.423367 4970 scope.go:117] "RemoveContainer" containerID="6cf468f025e7afe3a0ea46ae4fe4654c0fdd8ff1552b9011662a8e21c2e89945"
Sep 30 10:03:59 crc kubenswrapper[4970]: E0930 10:03:59.423650 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cf468f025e7afe3a0ea46ae4fe4654c0fdd8ff1552b9011662a8e21c2e89945\": container with ID starting with 6cf468f025e7afe3a0ea46ae4fe4654c0fdd8ff1552b9011662a8e21c2e89945 not found: ID does not exist" containerID="6cf468f025e7afe3a0ea46ae4fe4654c0fdd8ff1552b9011662a8e21c2e89945"
Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.423735 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cf468f025e7afe3a0ea46ae4fe4654c0fdd8ff1552b9011662a8e21c2e89945"} err="failed to get container status \"6cf468f025e7afe3a0ea46ae4fe4654c0fdd8ff1552b9011662a8e21c2e89945\": rpc error: code = NotFound desc = could not find container \"6cf468f025e7afe3a0ea46ae4fe4654c0fdd8ff1552b9011662a8e21c2e89945\": container with ID starting with 6cf468f025e7afe3a0ea46ae4fe4654c0fdd8ff1552b9011662a8e21c2e89945 not found: ID does not exist"
Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.479933 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-xrhdk"]
Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.522515 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-674b76c99f-xrhdk"]
Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.688321 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2185d7d0-bf11-4d98-87e7-bbd6f76f385e" path="/var/lib/kubelet/pods/2185d7d0-bf11-4d98-87e7-bbd6f76f385e/volumes"
Sep 30 10:03:59 crc kubenswrapper[4970]: I0930 10:03:59.809362 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6f69b67d68-jtzdb"]
Sep 30 10:04:00 crc kubenswrapper[4970]: I0930 10:04:00.156403 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f69b67d68-jtzdb" event={"ID":"03969bb8-1309-49b6-910c-de58a7d7b3d4","Type":"ContainerStarted","Data":"8e7735a9020a5372c8b30c13631ec410b4873522a8215b350db8f3a3f755ca16"}
Sep 30 10:04:00 crc kubenswrapper[4970]: I0930 10:04:00.189688 4970 generic.go:334] "Generic (PLEG): container finished" podID="ae142b63-43b3-488d-ab6d-327b057279b7" containerID="8b1a6f4aeeec8b807a8ba39e8e71940dbbdbcc74ded84e1e3448569270e0c732" exitCode=0
Sep 30 10:04:00 crc kubenswrapper[4970]: I0930 10:04:00.189754 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" event={"ID":"ae142b63-43b3-488d-ab6d-327b057279b7","Type":"ContainerDied","Data":"8b1a6f4aeeec8b807a8ba39e8e71940dbbdbcc74ded84e1e3448569270e0c732"}
Sep 30 10:04:00 crc kubenswrapper[4970]: I0930 10:04:00.189779 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" event={"ID":"ae142b63-43b3-488d-ab6d-327b057279b7","Type":"ContainerStarted","Data":"deaed87996c739b5eb53eee0ad0fa3f11c27d1eebe2452e5abc0359cfe16cf7a"}
Sep 30 10:04:00 crc kubenswrapper[4970]: I0930 10:04:00.214499 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bb0312b9-337a-4175-ae77-cd4964578d13","Type":"ContainerStarted","Data":"3261ec6cb9c284b777db2ac8ff14a2a008831e74921e00d192d01ac4e8ebb02e"}
Sep 30 10:04:00 crc kubenswrapper[4970]: I0930 10:04:00.214557 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Sep 30 10:04:00 crc kubenswrapper[4970]: I0930 10:04:00.284773 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.2847522 podStartE2EDuration="5.2847522s" podCreationTimestamp="2025-09-30 10:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:04:00.260431063 +0000 UTC m=+1053.332281997" watchObservedRunningTime="2025-09-30 10:04:00.2847522 +0000 UTC m=+1053.356603134"
Sep 30 10:04:00 crc kubenswrapper[4970]: I0930 10:04:00.844176 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6f6674bb4b-gp2wp"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.067785 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-8fbb4f9c8-n8t5n"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.225525 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f69b67d68-jtzdb" event={"ID":"03969bb8-1309-49b6-910c-de58a7d7b3d4","Type":"ContainerStarted","Data":"0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa"}
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.225895 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6f69b67d68-jtzdb"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.225915 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f69b67d68-jtzdb" event={"ID":"03969bb8-1309-49b6-910c-de58a7d7b3d4","Type":"ContainerStarted","Data":"612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73"}
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.229602 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" event={"ID":"ae142b63-43b3-488d-ab6d-327b057279b7","Type":"ContainerStarted","Data":"711b84e89636e6150e556c209d473eb869d26dd245ee4923717a1649878c1817"}
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.229903 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.246391 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6f69b67d68-jtzdb" podStartSLOduration=3.246372559 podStartE2EDuration="3.246372559s" podCreationTimestamp="2025-09-30 10:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:04:01.243622966 +0000 UTC m=+1054.315473920" watchObservedRunningTime="2025-09-30 10:04:01.246372559 +0000 UTC m=+1054.318223483"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.287379 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" podStartSLOduration=3.28735654 podStartE2EDuration="3.28735654s" podCreationTimestamp="2025-09-30 10:03:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:04:01.281324289 +0000 UTC m=+1054.353175233" watchObservedRunningTime="2025-09-30 10:04:01.28735654 +0000 UTC m=+1054.359207474"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.426279 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-65fc8b84cc-9lm9w"]
Sep 30 10:04:01 crc kubenswrapper[4970]: E0930 10:04:01.426778 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2185d7d0-bf11-4d98-87e7-bbd6f76f385e" containerName="init"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.426804 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2185d7d0-bf11-4d98-87e7-bbd6f76f385e" containerName="init"
Sep 30 10:04:01 crc kubenswrapper[4970]: E0930 10:04:01.426828 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2185d7d0-bf11-4d98-87e7-bbd6f76f385e" containerName="dnsmasq-dns"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.426837 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2185d7d0-bf11-4d98-87e7-bbd6f76f385e" containerName="dnsmasq-dns"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.427068 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="2185d7d0-bf11-4d98-87e7-bbd6f76f385e" containerName="dnsmasq-dns"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.428391 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.432652 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.432855 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.439949 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65fc8b84cc-9lm9w"]
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.460970 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.501391 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-config\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.501443 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-ovndb-tls-certs\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.501564 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-internal-tls-certs\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.501599 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-combined-ca-bundle\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.501699 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-public-tls-certs\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.501776 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-httpd-config\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.501801 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2cvf\" (UniqueName: \"kubernetes.io/projected/7244a0da-0989-4ba5-be03-aab3ab0fadce-kube-api-access-h2cvf\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.557911 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.603896 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-httpd-config\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.603936 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2cvf\" (UniqueName: \"kubernetes.io/projected/7244a0da-0989-4ba5-be03-aab3ab0fadce-kube-api-access-h2cvf\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.604011 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-config\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.604041 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-ovndb-tls-certs\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.604089 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-internal-tls-certs\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.604109 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-combined-ca-bundle\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.604149 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-public-tls-certs\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.610963 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-config\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.617982 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-httpd-config\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.620666 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-internal-tls-certs\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.621313 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-public-tls-certs\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.623601 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-combined-ca-bundle\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.623656 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7244a0da-0989-4ba5-be03-aab3ab0fadce-ovndb-tls-certs\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.628634 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2cvf\" (UniqueName: \"kubernetes.io/projected/7244a0da-0989-4ba5-be03-aab3ab0fadce-kube-api-access-h2cvf\") pod \"neutron-65fc8b84cc-9lm9w\" (UID: \"7244a0da-0989-4ba5-be03-aab3ab0fadce\") " pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:01 crc kubenswrapper[4970]: I0930 10:04:01.796971 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:02 crc kubenswrapper[4970]: I0930 10:04:02.101836 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-868647ddbb-dxwsf"
Sep 30 10:04:02 crc kubenswrapper[4970]: I0930 10:04:02.238173 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="711ac5a4-f629-436a-9cc4-2e7003919a5b" containerName="cinder-scheduler" containerID="cri-o://80adca25dff044014a98debe787a64f4295607628a2f68a030f0a63963a9ea48" gracePeriod=30
Sep 30 10:04:02 crc kubenswrapper[4970]: I0930 10:04:02.238200 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="711ac5a4-f629-436a-9cc4-2e7003919a5b" containerName="probe" containerID="cri-o://b96cfd4d2eddd1faa5f6cdb248bf5fa76ce3d0c6b60f5f1b59d7b0ed64e00874" gracePeriod=30
Sep 30 10:04:02 crc kubenswrapper[4970]: I0930 10:04:02.396966 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-65fc8b84cc-9lm9w"]
Sep 30 10:04:02 crc kubenswrapper[4970]: I0930 10:04:02.789007 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-868647ddbb-dxwsf"
Sep 30 10:04:02 crc kubenswrapper[4970]: I0930 10:04:02.877101 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d9bbff474-2259d"]
Sep 30 10:04:02 crc kubenswrapper[4970]: I0930 10:04:02.877658 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d9bbff474-2259d" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerName="barbican-api-log" containerID="cri-o://d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20" gracePeriod=30
Sep 30 10:04:02 crc kubenswrapper[4970]: I0930 10:04:02.878618 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6d9bbff474-2259d" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerName="barbican-api" containerID="cri-o://22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7" gracePeriod=30
Sep 30 10:04:02 crc kubenswrapper[4970]: I0930 10:04:02.910214 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6d9bbff474-2259d" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": EOF"
Sep 30 10:04:03 crc kubenswrapper[4970]: I0930 10:04:03.246594 4970 generic.go:334] "Generic (PLEG): container finished" podID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerID="d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20" exitCode=143
Sep 30 10:04:03 crc kubenswrapper[4970]: I0930 10:04:03.246635 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d9bbff474-2259d" event={"ID":"d8172f08-1f09-4a64-ad70-12b06b9cd244","Type":"ContainerDied","Data":"d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20"}
Sep 30 10:04:03 crc kubenswrapper[4970]: I0930 10:04:03.257416 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65fc8b84cc-9lm9w" event={"ID":"7244a0da-0989-4ba5-be03-aab3ab0fadce","Type":"ContainerStarted","Data":"c76ef5c362c740d4697134ecc1195bbf7e4e1f40f6d8fb4bc29138f41cded8a0"}
Sep 30 10:04:03 crc kubenswrapper[4970]: I0930 10:04:03.257459 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65fc8b84cc-9lm9w" event={"ID":"7244a0da-0989-4ba5-be03-aab3ab0fadce","Type":"ContainerStarted","Data":"817734a3e066c375b1b2b4ea81a101b8ca822031aa702ca24002e797afcc2757"}
Sep 30 10:04:03 crc kubenswrapper[4970]: I0930 10:04:03.257469 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-65fc8b84cc-9lm9w" event={"ID":"7244a0da-0989-4ba5-be03-aab3ab0fadce","Type":"ContainerStarted","Data":"db4cfcec0e29fca2be0c262fc7e6b77df46f5bb1a31e549729e31a5fbf9f4905"}
Sep 30 10:04:03 crc kubenswrapper[4970]: I0930 10:04:03.258825 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-65fc8b84cc-9lm9w"
Sep 30 10:04:03 crc kubenswrapper[4970]: I0930 10:04:03.281039 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-65fc8b84cc-9lm9w" podStartSLOduration=2.280999241 podStartE2EDuration="2.280999241s" podCreationTimestamp="2025-09-30 10:04:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:04:03.275752141 +0000 UTC m=+1056.347603075" watchObservedRunningTime="2025-09-30 10:04:03.280999241 +0000 UTC m=+1056.352850175"
Sep 30 10:04:03 crc kubenswrapper[4970]: I0930 10:04:03.678277 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-8fbb4f9c8-n8t5n"
Sep 30 10:04:03 crc kubenswrapper[4970]: I0930 10:04:03.736285 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6674bb4b-gp2wp"]
Sep 30 10:04:03 crc kubenswrapper[4970]: I0930 10:04:03.736549 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f6674bb4b-gp2wp" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon-log" containerID="cri-o://09566df69d0b238347b6a02d51e9d164ccc3a9001a0cbb3925c87d0cac561a81" gracePeriod=30
Sep 30 10:04:03 crc kubenswrapper[4970]: I0930 10:04:03.737000 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6f6674bb4b-gp2wp" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon" containerID="cri-o://d403e368d182209dd7a414661db3f68d40f32e6e8d7e33e45fb6614d5c1c8d68" gracePeriod=30
Sep 30 10:04:03 crc kubenswrapper[4970]: I0930 10:04:03.747341 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f6674bb4b-gp2wp" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": EOF"
Sep 30 10:04:04 crc kubenswrapper[4970]: I0930 10:04:04.268896 4970 generic.go:334] "Generic (PLEG): container finished" podID="711ac5a4-f629-436a-9cc4-2e7003919a5b" containerID="b96cfd4d2eddd1faa5f6cdb248bf5fa76ce3d0c6b60f5f1b59d7b0ed64e00874" exitCode=0
Sep 30 10:04:04 crc kubenswrapper[4970]: I0930 10:04:04.268935 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"711ac5a4-f629-436a-9cc4-2e7003919a5b","Type":"ContainerDied","Data":"b96cfd4d2eddd1faa5f6cdb248bf5fa76ce3d0c6b60f5f1b59d7b0ed64e00874"}
Sep 30 10:04:04 crc kubenswrapper[4970]: E0930 10:04:04.304324 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd890158e_f620_412b_ad4f_11437ded0689.slice/crio-660b0e488580dc7142b245aa3302eb7db047cb78a041e9e408641c278b8e033d\": RecentStats: unable to find data in memory cache]"
Sep 30 10:04:04 crc kubenswrapper[4970]: I0930 10:04:04.822527 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 10:04:04 crc kubenswrapper[4970]: I0930 10:04:04.822900 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 10:04:04 crc kubenswrapper[4970]: I0930 10:04:04.822982 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg"
Sep 30 10:04:04 crc kubenswrapper[4970]: I0930 10:04:04.823708 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e145a3822b96bea172852c66d616934dcbd854c3400bad7729b559f1279373d"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 30 10:04:04 crc kubenswrapper[4970]: I0930 10:04:04.823761 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://7e145a3822b96bea172852c66d616934dcbd854c3400bad7729b559f1279373d" gracePeriod=600
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.281194 4970 generic.go:334] "Generic (PLEG): container finished" podID="711ac5a4-f629-436a-9cc4-2e7003919a5b" containerID="80adca25dff044014a98debe787a64f4295607628a2f68a030f0a63963a9ea48" exitCode=0
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.281392 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"711ac5a4-f629-436a-9cc4-2e7003919a5b","Type":"ContainerDied","Data":"80adca25dff044014a98debe787a64f4295607628a2f68a030f0a63963a9ea48"}
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.281644 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"711ac5a4-f629-436a-9cc4-2e7003919a5b","Type":"ContainerDied","Data":"b618965db6285da49280d7d7b0249714271866844514b6af7d63625fdb934f7c"}
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.281669 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b618965db6285da49280d7d7b0249714271866844514b6af7d63625fdb934f7c"
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.284733 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="7e145a3822b96bea172852c66d616934dcbd854c3400bad7729b559f1279373d" exitCode=0
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.284805 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"7e145a3822b96bea172852c66d616934dcbd854c3400bad7729b559f1279373d"}
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.284831 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"39953ec91959e4f49ceb3ed4f3dfdbe1d7a2e025175d7df609efe49614e86013"}
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.284851 4970 scope.go:117] "RemoveContainer" containerID="00bf1152552c1c478a9068852104f804953a83e3c45cb022e488704fa11cacf9"
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.300616 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.415740 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-combined-ca-bundle\") pod \"711ac5a4-f629-436a-9cc4-2e7003919a5b\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") "
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.415903 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-config-data\") pod \"711ac5a4-f629-436a-9cc4-2e7003919a5b\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") "
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.415935 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-config-data-custom\") pod \"711ac5a4-f629-436a-9cc4-2e7003919a5b\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") "
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.416026 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvstw\" (UniqueName: \"kubernetes.io/projected/711ac5a4-f629-436a-9cc4-2e7003919a5b-kube-api-access-fvstw\") pod \"711ac5a4-f629-436a-9cc4-2e7003919a5b\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") "
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.416096 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-scripts\") pod \"711ac5a4-f629-436a-9cc4-2e7003919a5b\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") "
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.416162 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/711ac5a4-f629-436a-9cc4-2e7003919a5b-etc-machine-id\") pod \"711ac5a4-f629-436a-9cc4-2e7003919a5b\" (UID: \"711ac5a4-f629-436a-9cc4-2e7003919a5b\") "
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.442666 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/711ac5a4-f629-436a-9cc4-2e7003919a5b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "711ac5a4-f629-436a-9cc4-2e7003919a5b" (UID: "711ac5a4-f629-436a-9cc4-2e7003919a5b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.448871 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-scripts" (OuterVolumeSpecName: "scripts") pod "711ac5a4-f629-436a-9cc4-2e7003919a5b" (UID: "711ac5a4-f629-436a-9cc4-2e7003919a5b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.474515 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/711ac5a4-f629-436a-9cc4-2e7003919a5b-kube-api-access-fvstw" (OuterVolumeSpecName: "kube-api-access-fvstw") pod "711ac5a4-f629-436a-9cc4-2e7003919a5b" (UID: "711ac5a4-f629-436a-9cc4-2e7003919a5b"). InnerVolumeSpecName "kube-api-access-fvstw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.474630 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "711ac5a4-f629-436a-9cc4-2e7003919a5b" (UID: "711ac5a4-f629-436a-9cc4-2e7003919a5b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.511650 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "711ac5a4-f629-436a-9cc4-2e7003919a5b" (UID: "711ac5a4-f629-436a-9cc4-2e7003919a5b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.522294 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.522340 4970 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-config-data-custom\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.522354 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvstw\" (UniqueName: \"kubernetes.io/projected/711ac5a4-f629-436a-9cc4-2e7003919a5b-kube-api-access-fvstw\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.522369 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-scripts\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.522382 4970 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/711ac5a4-f629-436a-9cc4-2e7003919a5b-etc-machine-id\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.631182 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-config-data" (OuterVolumeSpecName: "config-data") pod "711ac5a4-f629-436a-9cc4-2e7003919a5b" (UID: "711ac5a4-f629-436a-9cc4-2e7003919a5b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:05 crc kubenswrapper[4970]: I0930 10:04:05.726205 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/711ac5a4-f629-436a-9cc4-2e7003919a5b-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.312185 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-6d9bbff474-2259d" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:38164->10.217.0.161:9311: read: connection reset by peer"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.318054 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.341208 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.351830 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.367961 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d9bbff474-2259d" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: connect: connection refused"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.367983 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6d9bbff474-2259d" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: connect: connection refused"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.375193 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Sep 30 10:04:06 crc kubenswrapper[4970]: E0930 10:04:06.375709 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711ac5a4-f629-436a-9cc4-2e7003919a5b" containerName="probe"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.375734 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="711ac5a4-f629-436a-9cc4-2e7003919a5b" containerName="probe"
Sep 30 10:04:06 crc kubenswrapper[4970]: E0930 10:04:06.375779 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="711ac5a4-f629-436a-9cc4-2e7003919a5b" containerName="cinder-scheduler"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.375788 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="711ac5a4-f629-436a-9cc4-2e7003919a5b" containerName="cinder-scheduler"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.376039 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="711ac5a4-f629-436a-9cc4-2e7003919a5b" containerName="cinder-scheduler"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.376087 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="711ac5a4-f629-436a-9cc4-2e7003919a5b" containerName="probe"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.377404 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.381539 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.384640 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.439918 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p8sj\" (UniqueName: \"kubernetes.io/projected/7510aa65-ae21-4344-94a7-9354f0822ae3-kube-api-access-4p8sj\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.439991 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7510aa65-ae21-4344-94a7-9354f0822ae3-scripts\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.440079 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7510aa65-ae21-4344-94a7-9354f0822ae3-config-data\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.440103 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7510aa65-ae21-4344-94a7-9354f0822ae3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.440140 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7510aa65-ae21-4344-94a7-9354f0822ae3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.440181 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7510aa65-ae21-4344-94a7-9354f0822ae3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.541365 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7510aa65-ae21-4344-94a7-9354f0822ae3-scripts\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.541475 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7510aa65-ae21-4344-94a7-9354f0822ae3-config-data\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.541503 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7510aa65-ae21-4344-94a7-9354f0822ae3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.541543 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7510aa65-ae21-4344-94a7-9354f0822ae3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.541584 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7510aa65-ae21-4344-94a7-9354f0822ae3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.541602 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p8sj\" (UniqueName: \"kubernetes.io/projected/7510aa65-ae21-4344-94a7-9354f0822ae3-kube-api-access-4p8sj\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.544795 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7510aa65-ae21-4344-94a7-9354f0822ae3-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.550746 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7510aa65-ae21-4344-94a7-9354f0822ae3-config-data\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.555968 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7510aa65-ae21-4344-94a7-9354f0822ae3-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.557270 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7510aa65-ae21-4344-94a7-9354f0822ae3-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.568290 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p8sj\" (UniqueName: \"kubernetes.io/projected/7510aa65-ae21-4344-94a7-9354f0822ae3-kube-api-access-4p8sj\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.573661 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7510aa65-ae21-4344-94a7-9354f0822ae3-scripts\") pod \"cinder-scheduler-0\" (UID: \"7510aa65-ae21-4344-94a7-9354f0822ae3\") " pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.702916 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.912486 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.952545 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xsmsw\" (UniqueName: \"kubernetes.io/projected/d8172f08-1f09-4a64-ad70-12b06b9cd244-kube-api-access-xsmsw\") pod \"d8172f08-1f09-4a64-ad70-12b06b9cd244\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") "
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.952655 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8172f08-1f09-4a64-ad70-12b06b9cd244-logs\") pod \"d8172f08-1f09-4a64-ad70-12b06b9cd244\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") "
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.952680 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-config-data-custom\") pod \"d8172f08-1f09-4a64-ad70-12b06b9cd244\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") "
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.952712 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-config-data\") pod \"d8172f08-1f09-4a64-ad70-12b06b9cd244\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") "
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.952821 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-combined-ca-bundle\") pod \"d8172f08-1f09-4a64-ad70-12b06b9cd244\" (UID: \"d8172f08-1f09-4a64-ad70-12b06b9cd244\") "
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.955267 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8172f08-1f09-4a64-ad70-12b06b9cd244-logs" (OuterVolumeSpecName: "logs") pod "d8172f08-1f09-4a64-ad70-12b06b9cd244" (UID: "d8172f08-1f09-4a64-ad70-12b06b9cd244"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.962366 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d8172f08-1f09-4a64-ad70-12b06b9cd244" (UID: "d8172f08-1f09-4a64-ad70-12b06b9cd244"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:06 crc kubenswrapper[4970]: I0930 10:04:06.972355 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8172f08-1f09-4a64-ad70-12b06b9cd244-kube-api-access-xsmsw" (OuterVolumeSpecName: "kube-api-access-xsmsw") pod "d8172f08-1f09-4a64-ad70-12b06b9cd244" (UID: "d8172f08-1f09-4a64-ad70-12b06b9cd244"). InnerVolumeSpecName "kube-api-access-xsmsw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.011144 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d8172f08-1f09-4a64-ad70-12b06b9cd244" (UID: "d8172f08-1f09-4a64-ad70-12b06b9cd244"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.054950 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d8172f08-1f09-4a64-ad70-12b06b9cd244-logs\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.054996 4970 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-config-data-custom\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.055039 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.055051 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xsmsw\" (UniqueName: \"kubernetes.io/projected/d8172f08-1f09-4a64-ad70-12b06b9cd244-kube-api-access-xsmsw\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.056975 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-config-data" (OuterVolumeSpecName: "config-data") pod "d8172f08-1f09-4a64-ad70-12b06b9cd244" (UID: "d8172f08-1f09-4a64-ad70-12b06b9cd244"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.156683 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d8172f08-1f09-4a64-ad70-12b06b9cd244-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.166437 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f6674bb4b-gp2wp" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:42338->10.217.0.150:8443: read: connection reset by peer"
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.224821 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.235394 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f6674bb4b-gp2wp" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.336540 4970 generic.go:334] "Generic (PLEG): container finished" podID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerID="22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7" exitCode=0
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.336647 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d9bbff474-2259d" event={"ID":"d8172f08-1f09-4a64-ad70-12b06b9cd244","Type":"ContainerDied","Data":"22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7"}
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.336683 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6d9bbff474-2259d" event={"ID":"d8172f08-1f09-4a64-ad70-12b06b9cd244","Type":"ContainerDied","Data":"78fbcb1289a22616de41945f635c0d4c94912bd25dd1ac9b1af701e022acd22b"}
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.336707 4970 scope.go:117] "RemoveContainer" containerID="22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7"
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.336847 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6d9bbff474-2259d"
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.356889 4970 generic.go:334] "Generic (PLEG): container finished" podID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerID="d403e368d182209dd7a414661db3f68d40f32e6e8d7e33e45fb6614d5c1c8d68" exitCode=0
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.356957 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6674bb4b-gp2wp" event={"ID":"48bd5d72-868c-4d81-9d1f-6a03ba997169","Type":"ContainerDied","Data":"d403e368d182209dd7a414661db3f68d40f32e6e8d7e33e45fb6614d5c1c8d68"}
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.369533 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7510aa65-ae21-4344-94a7-9354f0822ae3","Type":"ContainerStarted","Data":"ff09d4fb7fcd8447758c70475449d83d67736469b75ce29d69c079bd787f5af3"}
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.376743 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6d9bbff474-2259d"]
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.390858 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6d9bbff474-2259d"]
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.394267 4970 scope.go:117] "RemoveContainer" containerID="d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20"
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.454210 4970 scope.go:117] "RemoveContainer" containerID="22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7"
Sep 30 10:04:07 crc kubenswrapper[4970]: E0930 10:04:07.456409 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7\": container with ID starting with 22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7 not found: ID does not exist" containerID="22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7"
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.456458 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7"} err="failed to get container status \"22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7\": rpc error: code = NotFound desc = could not find container \"22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7\": container with ID starting with 22f3995f5c4964fee64b72e65cdb11aa4cbc0e0a7dda793fd8061d09b4e031d7 not found: ID does not exist"
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.456486 4970 scope.go:117] "RemoveContainer" containerID="d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20"
Sep 30 10:04:07 crc kubenswrapper[4970]: E0930 10:04:07.457658 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20\": container with ID starting with d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20 not found: ID does not exist" containerID="d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20"
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.457700 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20"} err="failed to get container status \"d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20\": rpc error: code = NotFound desc = could not find container \"d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20\": container with ID starting with d8f9077c88f358e6a48bfa7933986676d7f93793bff59fc436af0a4b90df7e20 not found: ID does not exist"
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.683596 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="711ac5a4-f629-436a-9cc4-2e7003919a5b" path="/var/lib/kubelet/pods/711ac5a4-f629-436a-9cc4-2e7003919a5b/volumes"
Sep 30 10:04:07 crc kubenswrapper[4970]: I0930 10:04:07.684437 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" path="/var/lib/kubelet/pods/d8172f08-1f09-4a64-ad70-12b06b9cd244/volumes"
Sep 30 10:04:08 crc kubenswrapper[4970]: I0930 10:04:08.398386 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7510aa65-ae21-4344-94a7-9354f0822ae3","Type":"ContainerStarted","Data":"1d052a9e38468ad396a18fe5c127948e88de00223c60e35368e629f6eb8ed0c4"}
Sep 30 10:04:08 crc kubenswrapper[4970]: I0930 10:04:08.506250 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr"
Sep 30 10:04:08 crc kubenswrapper[4970]: I0930 10:04:08.566807 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-mj669"]
Sep 30 10:04:08 crc kubenswrapper[4970]: I0930 10:04:08.567112 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-mj669" podUID="7ca4f52e-812a-41ff-b9b6-0a193d76560d" containerName="dnsmasq-dns" containerID="cri-o://8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4" gracePeriod=10
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.097736 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.222409 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-mj669"
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.346593 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ggvnf\" (UniqueName: \"kubernetes.io/projected/7ca4f52e-812a-41ff-b9b6-0a193d76560d-kube-api-access-ggvnf\") pod \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") "
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.346747 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-dns-swift-storage-0\") pod \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") "
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.346873 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-ovsdbserver-nb\") pod \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") "
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.346901 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-dns-svc\") pod \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") "
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.347060 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-ovsdbserver-sb\") pod \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") "
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.347107 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-config\") pod \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\" (UID: \"7ca4f52e-812a-41ff-b9b6-0a193d76560d\") "
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.396023 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ca4f52e-812a-41ff-b9b6-0a193d76560d-kube-api-access-ggvnf" (OuterVolumeSpecName: "kube-api-access-ggvnf") pod "7ca4f52e-812a-41ff-b9b6-0a193d76560d" (UID: "7ca4f52e-812a-41ff-b9b6-0a193d76560d"). InnerVolumeSpecName "kube-api-access-ggvnf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.413822 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7ca4f52e-812a-41ff-b9b6-0a193d76560d" (UID: "7ca4f52e-812a-41ff-b9b6-0a193d76560d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.421508 4970 generic.go:334] "Generic (PLEG): container finished" podID="7ca4f52e-812a-41ff-b9b6-0a193d76560d" containerID="8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4" exitCode=0
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.421598 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-mj669" event={"ID":"7ca4f52e-812a-41ff-b9b6-0a193d76560d","Type":"ContainerDied","Data":"8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4"}
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.421633 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-mj669" event={"ID":"7ca4f52e-812a-41ff-b9b6-0a193d76560d","Type":"ContainerDied","Data":"0ead9b7e6a477225c1391505e74a3dee3961c672d4e6c4e183b52878caf48e6d"}
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.421660 4970 scope.go:117] "RemoveContainer" containerID="8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4"
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.421821 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-mj669"
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.428869 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7510aa65-ae21-4344-94a7-9354f0822ae3","Type":"ContainerStarted","Data":"2917cdbabc0b4f763c769327d709dfab0c0c3cd9600bafd298d46b6ce321d7f9"}
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.449502 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.449546 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggvnf\" (UniqueName: \"kubernetes.io/projected/7ca4f52e-812a-41ff-b9b6-0a193d76560d-kube-api-access-ggvnf\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.505209 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-config" (OuterVolumeSpecName: "config") pod "7ca4f52e-812a-41ff-b9b6-0a193d76560d" (UID: "7ca4f52e-812a-41ff-b9b6-0a193d76560d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.514335 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.5143155 podStartE2EDuration="3.5143155s" podCreationTimestamp="2025-09-30 10:04:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:04:09.472910139 +0000 UTC m=+1062.544761073" watchObservedRunningTime="2025-09-30 10:04:09.5143155 +0000 UTC m=+1062.586166434"
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.543482 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7ca4f52e-812a-41ff-b9b6-0a193d76560d" (UID: "7ca4f52e-812a-41ff-b9b6-0a193d76560d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.552253 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-config\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.552294 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.585162 4970 scope.go:117] "RemoveContainer" containerID="399fc3993063bd6ca7f3ddd4a1b99c6dfdd26ffb727a23176b8e6aae6d4fb95b"
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.585191 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7ca4f52e-812a-41ff-b9b6-0a193d76560d" (UID: "7ca4f52e-812a-41ff-b9b6-0a193d76560d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.592604 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7ca4f52e-812a-41ff-b9b6-0a193d76560d" (UID: "7ca4f52e-812a-41ff-b9b6-0a193d76560d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.626085 4970 scope.go:117] "RemoveContainer" containerID="8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4"
Sep 30 10:04:09 crc kubenswrapper[4970]: E0930 10:04:09.627829 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4\": container with ID starting with 8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4 not found: ID does not exist" containerID="8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4"
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.627868 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4"} err="failed to get container status \"8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4\": rpc error: code = NotFound desc = could not find container \"8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4\": container with ID starting with 8fe89857112447281f0a753d8456ccca324f0f7d5ca09a6a54b5038f332ee4c4 not found: ID does not exist"
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.627892 4970 scope.go:117] "RemoveContainer" containerID="399fc3993063bd6ca7f3ddd4a1b99c6dfdd26ffb727a23176b8e6aae6d4fb95b"
Sep 30 10:04:09 crc kubenswrapper[4970]: E0930 10:04:09.628217 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"399fc3993063bd6ca7f3ddd4a1b99c6dfdd26ffb727a23176b8e6aae6d4fb95b\": container with ID starting with 399fc3993063bd6ca7f3ddd4a1b99c6dfdd26ffb727a23176b8e6aae6d4fb95b not found: ID does not exist" containerID="399fc3993063bd6ca7f3ddd4a1b99c6dfdd26ffb727a23176b8e6aae6d4fb95b"
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.628262 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"399fc3993063bd6ca7f3ddd4a1b99c6dfdd26ffb727a23176b8e6aae6d4fb95b"} err="failed to get container status \"399fc3993063bd6ca7f3ddd4a1b99c6dfdd26ffb727a23176b8e6aae6d4fb95b\": rpc error: code = NotFound desc = could not find container \"399fc3993063bd6ca7f3ddd4a1b99c6dfdd26ffb727a23176b8e6aae6d4fb95b\": container with ID starting with 399fc3993063bd6ca7f3ddd4a1b99c6dfdd26ffb727a23176b8e6aae6d4fb95b not found: ID does not exist"
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.653952 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.654011 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7ca4f52e-812a-41ff-b9b6-0a193d76560d-dns-svc\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.744170 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-mj669"]
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.747933 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-mj669"]
Sep 30 10:04:09 crc kubenswrapper[4970]: I0930 10:04:09.757596 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6cf9989bfd-qxb2j"
Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.051855 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Sep 30 10:04:11 crc kubenswrapper[4970]: E0930 10:04:11.052585 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca4f52e-812a-41ff-b9b6-0a193d76560d" containerName="dnsmasq-dns"
Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.052600 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca4f52e-812a-41ff-b9b6-0a193d76560d" containerName="dnsmasq-dns"
Sep 30 10:04:11 crc kubenswrapper[4970]: E0930 10:04:11.052623 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerName="barbican-api"
Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.052629 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerName="barbican-api"
Sep 30 10:04:11 crc kubenswrapper[4970]: E0930 10:04:11.052643 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerName="barbican-api-log"
Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.052649 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerName="barbican-api-log"
Sep 30 10:04:11 crc kubenswrapper[4970]: E0930 10:04:11.052668 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ca4f52e-812a-41ff-b9b6-0a193d76560d" containerName="init"
Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.052673 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ca4f52e-812a-41ff-b9b6-0a193d76560d" containerName="init"
Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.052841 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerName="barbican-api-log"
Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.052850 4970
memory_manager.go:354] "RemoveStaleState removing state" podUID="d8172f08-1f09-4a64-ad70-12b06b9cd244" containerName="barbican-api" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.052871 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ca4f52e-812a-41ff-b9b6-0a193d76560d" containerName="dnsmasq-dns" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.053576 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.057217 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.057896 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.058051 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-4s4xz" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.064799 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.182224 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/910acaa9-2831-4c1c-8dca-c8bab23a65a5-openstack-config\") pod \"openstackclient\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.182383 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pfgd\" (UniqueName: \"kubernetes.io/projected/910acaa9-2831-4c1c-8dca-c8bab23a65a5-kube-api-access-2pfgd\") pod \"openstackclient\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.182429 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/910acaa9-2831-4c1c-8dca-c8bab23a65a5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.182472 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/910acaa9-2831-4c1c-8dca-c8bab23a65a5-openstack-config-secret\") pod \"openstackclient\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.284201 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/910acaa9-2831-4c1c-8dca-c8bab23a65a5-openstack-config-secret\") pod \"openstackclient\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.284308 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/910acaa9-2831-4c1c-8dca-c8bab23a65a5-openstack-config\") pod \"openstackclient\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.284396 
4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pfgd\" (UniqueName: \"kubernetes.io/projected/910acaa9-2831-4c1c-8dca-c8bab23a65a5-kube-api-access-2pfgd\") pod \"openstackclient\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.284428 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/910acaa9-2831-4c1c-8dca-c8bab23a65a5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.285314 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/910acaa9-2831-4c1c-8dca-c8bab23a65a5-openstack-config\") pod \"openstackclient\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.291116 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/910acaa9-2831-4c1c-8dca-c8bab23a65a5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.292778 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/910acaa9-2831-4c1c-8dca-c8bab23a65a5-openstack-config-secret\") pod \"openstackclient\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.304579 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pfgd\" (UniqueName: \"kubernetes.io/projected/910acaa9-2831-4c1c-8dca-c8bab23a65a5-kube-api-access-2pfgd\") pod \"openstackclient\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.370908 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.455055 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.480921 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.495548 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.496980 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.516449 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.624574 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8d91e0e8-ee07-493a-bb4d-6949ce548047-openstack-config\") pod \"openstackclient\" (UID: \"8d91e0e8-ee07-493a-bb4d-6949ce548047\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.624675 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d91e0e8-ee07-493a-bb4d-6949ce548047-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8d91e0e8-ee07-493a-bb4d-6949ce548047\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.624736 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8d91e0e8-ee07-493a-bb4d-6949ce548047-openstack-config-secret\") pod \"openstackclient\" (UID: \"8d91e0e8-ee07-493a-bb4d-6949ce548047\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.624830 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cvx4\" (UniqueName: \"kubernetes.io/projected/8d91e0e8-ee07-493a-bb4d-6949ce548047-kube-api-access-8cvx4\") pod \"openstackclient\" (UID: \"8d91e0e8-ee07-493a-bb4d-6949ce548047\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.691789 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ca4f52e-812a-41ff-b9b6-0a193d76560d" path="/var/lib/kubelet/pods/7ca4f52e-812a-41ff-b9b6-0a193d76560d/volumes" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.704075 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.727230 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8d91e0e8-ee07-493a-bb4d-6949ce548047-openstack-config\") pod \"openstackclient\" (UID: \"8d91e0e8-ee07-493a-bb4d-6949ce548047\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.727318 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d91e0e8-ee07-493a-bb4d-6949ce548047-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8d91e0e8-ee07-493a-bb4d-6949ce548047\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.727347 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8d91e0e8-ee07-493a-bb4d-6949ce548047-openstack-config-secret\") pod \"openstackclient\" (UID: \"8d91e0e8-ee07-493a-bb4d-6949ce548047\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.727375 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cvx4\" (UniqueName: 
\"kubernetes.io/projected/8d91e0e8-ee07-493a-bb4d-6949ce548047-kube-api-access-8cvx4\") pod \"openstackclient\" (UID: \"8d91e0e8-ee07-493a-bb4d-6949ce548047\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.729196 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8d91e0e8-ee07-493a-bb4d-6949ce548047-openstack-config\") pod \"openstackclient\" (UID: \"8d91e0e8-ee07-493a-bb4d-6949ce548047\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.743593 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8d91e0e8-ee07-493a-bb4d-6949ce548047-openstack-config-secret\") pod \"openstackclient\" (UID: \"8d91e0e8-ee07-493a-bb4d-6949ce548047\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.744462 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d91e0e8-ee07-493a-bb4d-6949ce548047-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8d91e0e8-ee07-493a-bb4d-6949ce548047\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.751216 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cvx4\" (UniqueName: \"kubernetes.io/projected/8d91e0e8-ee07-493a-bb4d-6949ce548047-kube-api-access-8cvx4\") pod \"openstackclient\" (UID: \"8d91e0e8-ee07-493a-bb4d-6949ce548047\") " pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: E0930 10:04:11.796325 4970 log.go:32] "RunPodSandbox from runtime service failed" err=< Sep 30 10:04:11 crc kubenswrapper[4970]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_910acaa9-2831-4c1c-8dca-c8bab23a65a5_0(9b9f3bec1ad0c14be03136d4ecca7e87c2aae9bbc59fd4971633d61bd5080ce3): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9b9f3bec1ad0c14be03136d4ecca7e87c2aae9bbc59fd4971633d61bd5080ce3" Netns:"/var/run/netns/111e7d66-f14e-4d00-a533-c43260084721" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=9b9f3bec1ad0c14be03136d4ecca7e87c2aae9bbc59fd4971633d61bd5080ce3;K8S_POD_UID=910acaa9-2831-4c1c-8dca-c8bab23a65a5" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/910acaa9-2831-4c1c-8dca-c8bab23a65a5:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient 9b9f3bec1ad0c14be03136d4ecca7e87c2aae9bbc59fd4971633d61bd5080ce3 network default NAD default] [openstack/openstackclient 9b9f3bec1ad0c14be03136d4ecca7e87c2aae9bbc59fd4971633d61bd5080ce3 network default NAD default] pod deleted before sandbox ADD operation began Sep 30 10:04:11 crc kubenswrapper[4970]: ' Sep 30 10:04:11 crc kubenswrapper[4970]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Sep 30 10:04:11 crc kubenswrapper[4970]: > Sep 30 10:04:11 crc kubenswrapper[4970]: E0930 10:04:11.796410 4970 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Sep 30 10:04:11 crc kubenswrapper[4970]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_910acaa9-2831-4c1c-8dca-c8bab23a65a5_0(9b9f3bec1ad0c14be03136d4ecca7e87c2aae9bbc59fd4971633d61bd5080ce3): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9b9f3bec1ad0c14be03136d4ecca7e87c2aae9bbc59fd4971633d61bd5080ce3" Netns:"/var/run/netns/111e7d66-f14e-4d00-a533-c43260084721" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=9b9f3bec1ad0c14be03136d4ecca7e87c2aae9bbc59fd4971633d61bd5080ce3;K8S_POD_UID=910acaa9-2831-4c1c-8dca-c8bab23a65a5" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: [openstack/openstackclient/910acaa9-2831-4c1c-8dca-c8bab23a65a5:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openstack/openstackclient 9b9f3bec1ad0c14be03136d4ecca7e87c2aae9bbc59fd4971633d61bd5080ce3 network default NAD default] [openstack/openstackclient 9b9f3bec1ad0c14be03136d4ecca7e87c2aae9bbc59fd4971633d61bd5080ce3 network default NAD default] pod deleted before sandbox ADD operation began Sep 30 10:04:11 crc kubenswrapper[4970]: ' Sep 30 10:04:11 crc kubenswrapper[4970]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Sep 30 10:04:11 crc kubenswrapper[4970]: > pod="openstack/openstackclient" Sep 30 10:04:11 crc kubenswrapper[4970]: I0930 10:04:11.826398 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.292195 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Sep 30 10:04:12 crc kubenswrapper[4970]: W0930 10:04:12.296429 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d91e0e8_ee07_493a_bb4d_6949ce548047.slice/crio-b5d293752e9ab70054f17ea38ebb004e26b21429d88c7686aa8f6feabc5082d5 WatchSource:0}: Error finding container b5d293752e9ab70054f17ea38ebb004e26b21429d88c7686aa8f6feabc5082d5: Status 404 returned error can't find the container with id b5d293752e9ab70054f17ea38ebb004e26b21429d88c7686aa8f6feabc5082d5 Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.462621 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.462791 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8d91e0e8-ee07-493a-bb4d-6949ce548047","Type":"ContainerStarted","Data":"b5d293752e9ab70054f17ea38ebb004e26b21429d88c7686aa8f6feabc5082d5"} Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.477269 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.480351 4970 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="910acaa9-2831-4c1c-8dca-c8bab23a65a5" podUID="8d91e0e8-ee07-493a-bb4d-6949ce548047" Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.656332 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2pfgd\" (UniqueName: \"kubernetes.io/projected/910acaa9-2831-4c1c-8dca-c8bab23a65a5-kube-api-access-2pfgd\") pod \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.656703 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/910acaa9-2831-4c1c-8dca-c8bab23a65a5-combined-ca-bundle\") pod \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.656742 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/910acaa9-2831-4c1c-8dca-c8bab23a65a5-openstack-config-secret\") pod \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.656889 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/910acaa9-2831-4c1c-8dca-c8bab23a65a5-openstack-config\") pod \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\" (UID: \"910acaa9-2831-4c1c-8dca-c8bab23a65a5\") " Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.657750 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/910acaa9-2831-4c1c-8dca-c8bab23a65a5-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "910acaa9-2831-4c1c-8dca-c8bab23a65a5" (UID: "910acaa9-2831-4c1c-8dca-c8bab23a65a5"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.666866 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/910acaa9-2831-4c1c-8dca-c8bab23a65a5-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "910acaa9-2831-4c1c-8dca-c8bab23a65a5" (UID: "910acaa9-2831-4c1c-8dca-c8bab23a65a5"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.679301 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/910acaa9-2831-4c1c-8dca-c8bab23a65a5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "910acaa9-2831-4c1c-8dca-c8bab23a65a5" (UID: "910acaa9-2831-4c1c-8dca-c8bab23a65a5"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.682680 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/910acaa9-2831-4c1c-8dca-c8bab23a65a5-kube-api-access-2pfgd" (OuterVolumeSpecName: "kube-api-access-2pfgd") pod "910acaa9-2831-4c1c-8dca-c8bab23a65a5" (UID: "910acaa9-2831-4c1c-8dca-c8bab23a65a5"). InnerVolumeSpecName "kube-api-access-2pfgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.762169 4970 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/910acaa9-2831-4c1c-8dca-c8bab23a65a5-openstack-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.762365 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2pfgd\" (UniqueName: \"kubernetes.io/projected/910acaa9-2831-4c1c-8dca-c8bab23a65a5-kube-api-access-2pfgd\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.762425 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/910acaa9-2831-4c1c-8dca-c8bab23a65a5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:12 crc kubenswrapper[4970]: I0930 10:04:12.762479 4970 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/910acaa9-2831-4c1c-8dca-c8bab23a65a5-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:13 crc kubenswrapper[4970]: I0930 10:04:13.469029 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Sep 30 10:04:13 crc kubenswrapper[4970]: I0930 10:04:13.489957 4970 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="910acaa9-2831-4c1c-8dca-c8bab23a65a5" podUID="8d91e0e8-ee07-493a-bb4d-6949ce548047" Sep 30 10:04:13 crc kubenswrapper[4970]: I0930 10:04:13.681340 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="910acaa9-2831-4c1c-8dca-c8bab23a65a5" path="/var/lib/kubelet/pods/910acaa9-2831-4c1c-8dca-c8bab23a65a5/volumes" Sep 30 10:04:14 crc kubenswrapper[4970]: E0930 10:04:14.551415 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd890158e_f620_412b_ad4f_11437ded0689.slice/crio-660b0e488580dc7142b245aa3302eb7db047cb78a041e9e408641c278b8e033d\": RecentStats: unable to find data in memory cache]" Sep 30 10:04:14 crc kubenswrapper[4970]: I0930 10:04:14.750846 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:04:14 crc kubenswrapper[4970]: I0930 10:04:14.751833 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="ceilometer-central-agent" containerID="cri-o://b9009d88c7ee0430a93c6bd87022a31e376e616212194813caa0d493a547d591" gracePeriod=30 Sep 30 10:04:14 crc kubenswrapper[4970]: I0930 10:04:14.752265 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="ceilometer-notification-agent" containerID="cri-o://dc4adcdf28c7d1fecb7cb582dd272479236654301e9a9826055c3ed1073d8cdf" gracePeriod=30 Sep 30 10:04:14 crc kubenswrapper[4970]: I0930 10:04:14.752213 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="proxy-httpd" containerID="cri-o://127663ecbe2cc1d500f4ae987c736860c7b8cad788aca84c6e8f3cde29e83765" gracePeriod=30 Sep 30 10:04:14 crc kubenswrapper[4970]: I0930 10:04:14.752198 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="sg-core" containerID="cri-o://cc73d4cfcc8b59cb9d39c7ff5c1e649b070263868227c4cb47574f82d8d9577e" gracePeriod=30 Sep 30 10:04:14 crc kubenswrapper[4970]: I0930 10:04:14.866509 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.166:3000/\": read tcp 10.217.0.2:53958->10.217.0.166:3000: read: connection reset by peer" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.264172 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vkx9z"] Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.265390 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-vkx9z" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.283174 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vkx9z"] Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.330103 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdlhh\" (UniqueName: \"kubernetes.io/projected/9766459a-e1c1-48a0-a45c-bda24281c6d6-kube-api-access-xdlhh\") pod \"nova-api-db-create-vkx9z\" (UID: \"9766459a-e1c1-48a0-a45c-bda24281c6d6\") " pod="openstack/nova-api-db-create-vkx9z" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.432705 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdlhh\" (UniqueName: \"kubernetes.io/projected/9766459a-e1c1-48a0-a45c-bda24281c6d6-kube-api-access-xdlhh\") pod \"nova-api-db-create-vkx9z\" (UID: \"9766459a-e1c1-48a0-a45c-bda24281c6d6\") " pod="openstack/nova-api-db-create-vkx9z" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.462381 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdlhh\" (UniqueName: \"kubernetes.io/projected/9766459a-e1c1-48a0-a45c-bda24281c6d6-kube-api-access-xdlhh\") pod \"nova-api-db-create-vkx9z\" (UID: \"9766459a-e1c1-48a0-a45c-bda24281c6d6\") " pod="openstack/nova-api-db-create-vkx9z" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.471642 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-vpsk6"] Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.473583 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vpsk6" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.534644 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4jfd\" (UniqueName: \"kubernetes.io/projected/f300b93d-68bd-45af-a07a-4dcd57af3f00-kube-api-access-j4jfd\") pod \"nova-cell0-db-create-vpsk6\" (UID: \"f300b93d-68bd-45af-a07a-4dcd57af3f00\") " pod="openstack/nova-cell0-db-create-vpsk6" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.543056 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vpsk6"] Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.569461 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-597dc56955-zfx9s"] Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.571636 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.572604 4970 generic.go:334] "Generic (PLEG): container finished" podID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerID="127663ecbe2cc1d500f4ae987c736860c7b8cad788aca84c6e8f3cde29e83765" exitCode=0 Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.572630 4970 generic.go:334] "Generic (PLEG): container finished" podID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerID="cc73d4cfcc8b59cb9d39c7ff5c1e649b070263868227c4cb47574f82d8d9577e" exitCode=2 Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.572637 4970 generic.go:334] "Generic (PLEG): container finished" podID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerID="b9009d88c7ee0430a93c6bd87022a31e376e616212194813caa0d493a547d591" exitCode=0 Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.572651 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9eb0de84-58b0-45c0-9aff-7b2a82d73079","Type":"ContainerDied","Data":"127663ecbe2cc1d500f4ae987c736860c7b8cad788aca84c6e8f3cde29e83765"} Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.572672 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9eb0de84-58b0-45c0-9aff-7b2a82d73079","Type":"ContainerDied","Data":"cc73d4cfcc8b59cb9d39c7ff5c1e649b070263868227c4cb47574f82d8d9577e"} Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.572682 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9eb0de84-58b0-45c0-9aff-7b2a82d73079","Type":"ContainerDied","Data":"b9009d88c7ee0430a93c6bd87022a31e376e616212194813caa0d493a547d591"} Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.576219 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.576368 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-597dc56955-zfx9s"] Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.579461 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.579791 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.581516 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vkx9z" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.606546 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-lrs4s"] Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.607833 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-lrs4s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.618378 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lrs4s"] Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.637204 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-internal-tls-certs\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.637277 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-run-httpd\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.637306 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4jfd\" (UniqueName: \"kubernetes.io/projected/f300b93d-68bd-45af-a07a-4dcd57af3f00-kube-api-access-j4jfd\") pod \"nova-cell0-db-create-vpsk6\" (UID: \"f300b93d-68bd-45af-a07a-4dcd57af3f00\") " pod="openstack/nova-cell0-db-create-vpsk6" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.637342 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-combined-ca-bundle\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.637363 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-public-tls-certs\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.637406 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-config-data\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.637495 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pklxt\" (UniqueName: \"kubernetes.io/projected/0881a1bd-fe20-475f-9cf8-9869a3c11344-kube-api-access-pklxt\") pod \"nova-cell1-db-create-lrs4s\" (UID: \"0881a1bd-fe20-475f-9cf8-9869a3c11344\") " pod="openstack/nova-cell1-db-create-lrs4s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.637524 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sg2r\" (UniqueName: \"kubernetes.io/projected/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-kube-api-access-5sg2r\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 
10:04:15.637584 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-etc-swift\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.637616 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-log-httpd\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.662644 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4jfd\" (UniqueName: \"kubernetes.io/projected/f300b93d-68bd-45af-a07a-4dcd57af3f00-kube-api-access-j4jfd\") pod \"nova-cell0-db-create-vpsk6\" (UID: \"f300b93d-68bd-45af-a07a-4dcd57af3f00\") " pod="openstack/nova-cell0-db-create-vpsk6" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.738857 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-run-httpd\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.738902 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-combined-ca-bundle\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.738922 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-public-tls-certs\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.738952 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-config-data\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.739039 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pklxt\" (UniqueName: \"kubernetes.io/projected/0881a1bd-fe20-475f-9cf8-9869a3c11344-kube-api-access-pklxt\") pod \"nova-cell1-db-create-lrs4s\" (UID: \"0881a1bd-fe20-475f-9cf8-9869a3c11344\") " pod="openstack/nova-cell1-db-create-lrs4s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.739070 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5sg2r\" (UniqueName: \"kubernetes.io/projected/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-kube-api-access-5sg2r\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.739102 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-etc-swift\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.739136 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-log-httpd\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.739183 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-internal-tls-certs\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.741460 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-run-httpd\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.741630 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-log-httpd\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.746598 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-config-data\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.746691 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-public-tls-certs\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.749100 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-combined-ca-bundle\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.762195 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-etc-swift\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.762772 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5sg2r\" (UniqueName: 
\"kubernetes.io/projected/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-kube-api-access-5sg2r\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.763792 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pklxt\" (UniqueName: \"kubernetes.io/projected/0881a1bd-fe20-475f-9cf8-9869a3c11344-kube-api-access-pklxt\") pod \"nova-cell1-db-create-lrs4s\" (UID: \"0881a1bd-fe20-475f-9cf8-9869a3c11344\") " pod="openstack/nova-cell1-db-create-lrs4s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.764513 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1ade2be1-8027-4a99-ae4d-f0394e4d9c1d-internal-tls-certs\") pod \"swift-proxy-597dc56955-zfx9s\" (UID: \"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d\") " pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.864601 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vpsk6" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.930458 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-597dc56955-zfx9s" Sep 30 10:04:15 crc kubenswrapper[4970]: I0930 10:04:15.957473 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lrs4s" Sep 30 10:04:16 crc kubenswrapper[4970]: I0930 10:04:16.158543 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vkx9z"] Sep 30 10:04:16 crc kubenswrapper[4970]: I0930 10:04:16.496534 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-vpsk6"] Sep 30 10:04:16 crc kubenswrapper[4970]: W0930 10:04:16.505981 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf300b93d_68bd_45af_a07a_4dcd57af3f00.slice/crio-9552c39349c442d54bafa39212780435a7127344f648e2919542835a9d27f2a3 WatchSource:0}: Error finding container 9552c39349c442d54bafa39212780435a7127344f648e2919542835a9d27f2a3: Status 404 returned error can't find the container with id 9552c39349c442d54bafa39212780435a7127344f648e2919542835a9d27f2a3 Sep 30 10:04:16 crc kubenswrapper[4970]: I0930 10:04:16.601194 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vpsk6" event={"ID":"f300b93d-68bd-45af-a07a-4dcd57af3f00","Type":"ContainerStarted","Data":"9552c39349c442d54bafa39212780435a7127344f648e2919542835a9d27f2a3"} Sep 30 10:04:16 crc kubenswrapper[4970]: I0930 10:04:16.606263 4970 generic.go:334] "Generic (PLEG): container finished" podID="9766459a-e1c1-48a0-a45c-bda24281c6d6" containerID="3d42c78a809f5b69841ce96521ceb6ca09c3a19e72afb66cc9ec4aaa31bc574e" exitCode=0 Sep 30 10:04:16 crc kubenswrapper[4970]: I0930 10:04:16.606289 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vkx9z" event={"ID":"9766459a-e1c1-48a0-a45c-bda24281c6d6","Type":"ContainerDied","Data":"3d42c78a809f5b69841ce96521ceb6ca09c3a19e72afb66cc9ec4aaa31bc574e"} Sep 30 10:04:16 crc kubenswrapper[4970]: I0930 10:04:16.606303 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vkx9z" 
event={"ID":"9766459a-e1c1-48a0-a45c-bda24281c6d6","Type":"ContainerStarted","Data":"955b7c1b325f80a106fa42ccce6d525bd7853609d52fa9cdd251ca4a595029e5"}
Sep 30 10:04:16 crc kubenswrapper[4970]: I0930 10:04:16.817333 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-597dc56955-zfx9s"]
Sep 30 10:04:16 crc kubenswrapper[4970]: I0930 10:04:16.828286 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-lrs4s"]
Sep 30 10:04:16 crc kubenswrapper[4970]: W0930 10:04:16.890820 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0881a1bd_fe20_475f_9cf8_9869a3c11344.slice/crio-0d147d350824d7f511bcde7798aee70b6f22a6e5416b12594f4d4ed899f6a42b WatchSource:0}: Error finding container 0d147d350824d7f511bcde7798aee70b6f22a6e5416b12594f4d4ed899f6a42b: Status 404 returned error can't find the container with id 0d147d350824d7f511bcde7798aee70b6f22a6e5416b12594f4d4ed899f6a42b
Sep 30 10:04:17 crc kubenswrapper[4970]: I0930 10:04:17.016773 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Sep 30 10:04:17 crc kubenswrapper[4970]: I0930 10:04:17.234078 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f6674bb4b-gp2wp" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Sep 30 10:04:17 crc kubenswrapper[4970]: I0930 10:04:17.477466 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8cc9569d-ll5d9"
Sep 30 10:04:17 crc kubenswrapper[4970]: I0930 10:04:17.489714 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-8cc9569d-ll5d9"
Sep 30 10:04:17 crc kubenswrapper[4970]: I0930 10:04:17.621088 4970 generic.go:334] "Generic (PLEG): container finished" podID="0881a1bd-fe20-475f-9cf8-9869a3c11344" containerID="4b788ce585a535ed51047893a29ec7952437e6c58df74da4ccee06bc2091daf9" exitCode=0
Sep 30 10:04:17 crc kubenswrapper[4970]: I0930 10:04:17.621157 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lrs4s" event={"ID":"0881a1bd-fe20-475f-9cf8-9869a3c11344","Type":"ContainerDied","Data":"4b788ce585a535ed51047893a29ec7952437e6c58df74da4ccee06bc2091daf9"}
Sep 30 10:04:17 crc kubenswrapper[4970]: I0930 10:04:17.621186 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lrs4s" event={"ID":"0881a1bd-fe20-475f-9cf8-9869a3c11344","Type":"ContainerStarted","Data":"0d147d350824d7f511bcde7798aee70b6f22a6e5416b12594f4d4ed899f6a42b"}
Sep 30 10:04:17 crc kubenswrapper[4970]: I0930 10:04:17.623836 4970 generic.go:334] "Generic (PLEG): container finished" podID="f300b93d-68bd-45af-a07a-4dcd57af3f00" containerID="ebfcb695f4d298729bb91fb24f13d4a986cc5758849f62350c775c3d08c7bfa7" exitCode=0
Sep 30 10:04:17 crc kubenswrapper[4970]: I0930 10:04:17.623873 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vpsk6" event={"ID":"f300b93d-68bd-45af-a07a-4dcd57af3f00","Type":"ContainerDied","Data":"ebfcb695f4d298729bb91fb24f13d4a986cc5758849f62350c775c3d08c7bfa7"}
Sep 30 10:04:17 crc kubenswrapper[4970]: I0930 10:04:17.626498 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-597dc56955-zfx9s" event={"ID":"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d","Type":"ContainerStarted","Data":"1491bcd88f6f51c3b830fdd9d59be1ce613a277f5cb69f3369f7f20e477ccec8"}
Sep 30 10:04:17 crc kubenswrapper[4970]: I0930 10:04:17.626530 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-597dc56955-zfx9s" event={"ID":"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d","Type":"ContainerStarted","Data":"2330a52e51b4cf1b2d5f5024b002df84c5f27a52e68ba1beee2f47526dbc9dc5"}
Sep 30 10:04:18 crc kubenswrapper[4970]: I0930 10:04:18.153831 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vkx9z"
Sep 30 10:04:18 crc kubenswrapper[4970]: I0930 10:04:18.301442 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdlhh\" (UniqueName: \"kubernetes.io/projected/9766459a-e1c1-48a0-a45c-bda24281c6d6-kube-api-access-xdlhh\") pod \"9766459a-e1c1-48a0-a45c-bda24281c6d6\" (UID: \"9766459a-e1c1-48a0-a45c-bda24281c6d6\") "
Sep 30 10:04:18 crc kubenswrapper[4970]: I0930 10:04:18.324899 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9766459a-e1c1-48a0-a45c-bda24281c6d6-kube-api-access-xdlhh" (OuterVolumeSpecName: "kube-api-access-xdlhh") pod "9766459a-e1c1-48a0-a45c-bda24281c6d6" (UID: "9766459a-e1c1-48a0-a45c-bda24281c6d6"). InnerVolumeSpecName "kube-api-access-xdlhh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:04:18 crc kubenswrapper[4970]: I0930 10:04:18.405004 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdlhh\" (UniqueName: \"kubernetes.io/projected/9766459a-e1c1-48a0-a45c-bda24281c6d6-kube-api-access-xdlhh\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:18 crc kubenswrapper[4970]: I0930 10:04:18.655283 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vkx9z" event={"ID":"9766459a-e1c1-48a0-a45c-bda24281c6d6","Type":"ContainerDied","Data":"955b7c1b325f80a106fa42ccce6d525bd7853609d52fa9cdd251ca4a595029e5"}
Sep 30 10:04:18 crc kubenswrapper[4970]: I0930 10:04:18.655332 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955b7c1b325f80a106fa42ccce6d525bd7853609d52fa9cdd251ca4a595029e5"
Sep 30 10:04:18 crc kubenswrapper[4970]: I0930 10:04:18.655438 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vkx9z"
Sep 30 10:04:18 crc kubenswrapper[4970]: I0930 10:04:18.670296 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-597dc56955-zfx9s" event={"ID":"1ade2be1-8027-4a99-ae4d-f0394e4d9c1d","Type":"ContainerStarted","Data":"89e384aa7d03f50edf5f0c1cca9cb8735f954346afb402a93318a005c0cb1c5c"}
Sep 30 10:04:18 crc kubenswrapper[4970]: I0930 10:04:18.670351 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-597dc56955-zfx9s"
Sep 30 10:04:18 crc kubenswrapper[4970]: I0930 10:04:18.670369 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-597dc56955-zfx9s"
Sep 30 10:04:18 crc kubenswrapper[4970]: I0930 10:04:18.723765 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-597dc56955-zfx9s" podStartSLOduration=3.723738515 podStartE2EDuration="3.723738515s" podCreationTimestamp="2025-09-30 10:04:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:04:18.710158374 +0000 UTC m=+1071.782009328" watchObservedRunningTime="2025-09-30 10:04:18.723738515 +0000 UTC m=+1071.795589449"
Sep 30 10:04:20 crc kubenswrapper[4970]: I0930 10:04:20.693864 4970 generic.go:334] "Generic (PLEG): container finished" podID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerID="dc4adcdf28c7d1fecb7cb582dd272479236654301e9a9826055c3ed1073d8cdf" exitCode=0
Sep 30 10:04:20 crc kubenswrapper[4970]: I0930 10:04:20.693957 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9eb0de84-58b0-45c0-9aff-7b2a82d73079","Type":"ContainerDied","Data":"dc4adcdf28c7d1fecb7cb582dd272479236654301e9a9826055c3ed1073d8cdf"}
Sep 30 10:04:22 crc kubenswrapper[4970]: I0930 10:04:22.493347 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.166:3000/\": dial tcp 10.217.0.166:3000: connect: connection refused"
Sep 30 10:04:24 crc kubenswrapper[4970]: E0930 10:04:24.810684 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd890158e_f620_412b_ad4f_11437ded0689.slice/crio-660b0e488580dc7142b245aa3302eb7db047cb78a041e9e408641c278b8e033d\": RecentStats: unable to find data in memory cache]"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.393709 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.394291 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ce89199d-435e-4383-a28f-c6326ec1f954" containerName="glance-log" containerID="cri-o://0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910" gracePeriod=30
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.394390 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ce89199d-435e-4383-a28f-c6326ec1f954" containerName="glance-httpd" containerID="cri-o://ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5" gracePeriod=30
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.421568 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vpsk6"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.432762 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lrs4s"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.450457 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4jfd\" (UniqueName: \"kubernetes.io/projected/f300b93d-68bd-45af-a07a-4dcd57af3f00-kube-api-access-j4jfd\") pod \"f300b93d-68bd-45af-a07a-4dcd57af3f00\" (UID: \"f300b93d-68bd-45af-a07a-4dcd57af3f00\") "
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.459605 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f300b93d-68bd-45af-a07a-4dcd57af3f00-kube-api-access-j4jfd" (OuterVolumeSpecName: "kube-api-access-j4jfd") pod "f300b93d-68bd-45af-a07a-4dcd57af3f00" (UID: "f300b93d-68bd-45af-a07a-4dcd57af3f00"). InnerVolumeSpecName "kube-api-access-j4jfd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.557303 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pklxt\" (UniqueName: \"kubernetes.io/projected/0881a1bd-fe20-475f-9cf8-9869a3c11344-kube-api-access-pklxt\") pod \"0881a1bd-fe20-475f-9cf8-9869a3c11344\" (UID: \"0881a1bd-fe20-475f-9cf8-9869a3c11344\") "
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.558044 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4jfd\" (UniqueName: \"kubernetes.io/projected/f300b93d-68bd-45af-a07a-4dcd57af3f00-kube-api-access-j4jfd\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.596104 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0881a1bd-fe20-475f-9cf8-9869a3c11344-kube-api-access-pklxt" (OuterVolumeSpecName: "kube-api-access-pklxt") pod "0881a1bd-fe20-475f-9cf8-9869a3c11344" (UID: "0881a1bd-fe20-475f-9cf8-9869a3c11344"). InnerVolumeSpecName "kube-api-access-pklxt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.653059 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-03f9-account-create-wj49d"]
Sep 30 10:04:25 crc kubenswrapper[4970]: E0930 10:04:25.653474 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0881a1bd-fe20-475f-9cf8-9869a3c11344" containerName="mariadb-database-create"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.653491 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0881a1bd-fe20-475f-9cf8-9869a3c11344" containerName="mariadb-database-create"
Sep 30 10:04:25 crc kubenswrapper[4970]: E0930 10:04:25.653500 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f300b93d-68bd-45af-a07a-4dcd57af3f00" containerName="mariadb-database-create"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.653506 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f300b93d-68bd-45af-a07a-4dcd57af3f00" containerName="mariadb-database-create"
Sep 30 10:04:25 crc kubenswrapper[4970]: E0930 10:04:25.653526 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9766459a-e1c1-48a0-a45c-bda24281c6d6" containerName="mariadb-database-create"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.653534 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9766459a-e1c1-48a0-a45c-bda24281c6d6" containerName="mariadb-database-create"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.653685 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9766459a-e1c1-48a0-a45c-bda24281c6d6" containerName="mariadb-database-create"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.653704 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0881a1bd-fe20-475f-9cf8-9869a3c11344" containerName="mariadb-database-create"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.653713 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f300b93d-68bd-45af-a07a-4dcd57af3f00" containerName="mariadb-database-create"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.654384 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-03f9-account-create-wj49d"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.664077 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pklxt\" (UniqueName: \"kubernetes.io/projected/0881a1bd-fe20-475f-9cf8-9869a3c11344-kube-api-access-pklxt\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.664397 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-03f9-account-create-wj49d"]
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.672394 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.766197 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4ds2\" (UniqueName: \"kubernetes.io/projected/904d5a82-7c31-44b2-b222-58009f2f53ea-kube-api-access-l4ds2\") pod \"nova-api-03f9-account-create-wj49d\" (UID: \"904d5a82-7c31-44b2-b222-58009f2f53ea\") " pod="openstack/nova-api-03f9-account-create-wj49d"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.789209 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-vpsk6" event={"ID":"f300b93d-68bd-45af-a07a-4dcd57af3f00","Type":"ContainerDied","Data":"9552c39349c442d54bafa39212780435a7127344f648e2919542835a9d27f2a3"}
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.789266 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9552c39349c442d54bafa39212780435a7127344f648e2919542835a9d27f2a3"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.789500 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-vpsk6"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.795955 4970 generic.go:334] "Generic (PLEG): container finished" podID="ce89199d-435e-4383-a28f-c6326ec1f954" containerID="0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910" exitCode=143
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.796050 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce89199d-435e-4383-a28f-c6326ec1f954","Type":"ContainerDied","Data":"0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910"}
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.797915 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-lrs4s" event={"ID":"0881a1bd-fe20-475f-9cf8-9869a3c11344","Type":"ContainerDied","Data":"0d147d350824d7f511bcde7798aee70b6f22a6e5416b12594f4d4ed899f6a42b"}
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.797940 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d147d350824d7f511bcde7798aee70b6f22a6e5416b12594f4d4ed899f6a42b"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.798005 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-lrs4s"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.841212 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.870504 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eb0de84-58b0-45c0-9aff-7b2a82d73079-run-httpd\") pod \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") "
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.870555 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-config-data\") pod \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") "
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.870608 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-sg-core-conf-yaml\") pod \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") "
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.870654 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-combined-ca-bundle\") pod \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") "
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.870812 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-scripts\") pod \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") "
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.870924 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7k57\" (UniqueName: \"kubernetes.io/projected/9eb0de84-58b0-45c0-9aff-7b2a82d73079-kube-api-access-h7k57\") pod \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") "
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.870950 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eb0de84-58b0-45c0-9aff-7b2a82d73079-log-httpd\") pod \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\" (UID: \"9eb0de84-58b0-45c0-9aff-7b2a82d73079\") "
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.871358 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4ds2\" (UniqueName: \"kubernetes.io/projected/904d5a82-7c31-44b2-b222-58009f2f53ea-kube-api-access-l4ds2\") pod \"nova-api-03f9-account-create-wj49d\" (UID: \"904d5a82-7c31-44b2-b222-58009f2f53ea\") " pod="openstack/nova-api-03f9-account-create-wj49d"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.872647 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb0de84-58b0-45c0-9aff-7b2a82d73079-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9eb0de84-58b0-45c0-9aff-7b2a82d73079" (UID: "9eb0de84-58b0-45c0-9aff-7b2a82d73079"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.873948 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9eb0de84-58b0-45c0-9aff-7b2a82d73079-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9eb0de84-58b0-45c0-9aff-7b2a82d73079" (UID: "9eb0de84-58b0-45c0-9aff-7b2a82d73079"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.897868 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-scripts" (OuterVolumeSpecName: "scripts") pod "9eb0de84-58b0-45c0-9aff-7b2a82d73079" (UID: "9eb0de84-58b0-45c0-9aff-7b2a82d73079"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.898033 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9eb0de84-58b0-45c0-9aff-7b2a82d73079-kube-api-access-h7k57" (OuterVolumeSpecName: "kube-api-access-h7k57") pod "9eb0de84-58b0-45c0-9aff-7b2a82d73079" (UID: "9eb0de84-58b0-45c0-9aff-7b2a82d73079"). InnerVolumeSpecName "kube-api-access-h7k57". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.899832 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9eb0de84-58b0-45c0-9aff-7b2a82d73079" (UID: "9eb0de84-58b0-45c0-9aff-7b2a82d73079"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.900567 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4ds2\" (UniqueName: \"kubernetes.io/projected/904d5a82-7c31-44b2-b222-58009f2f53ea-kube-api-access-l4ds2\") pod \"nova-api-03f9-account-create-wj49d\" (UID: \"904d5a82-7c31-44b2-b222-58009f2f53ea\") " pod="openstack/nova-api-03f9-account-create-wj49d"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.937687 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-597dc56955-zfx9s"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.939575 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-597dc56955-zfx9s"
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.962159 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9eb0de84-58b0-45c0-9aff-7b2a82d73079" (UID: "9eb0de84-58b0-45c0-9aff-7b2a82d73079"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.980608 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7k57\" (UniqueName: \"kubernetes.io/projected/9eb0de84-58b0-45c0-9aff-7b2a82d73079-kube-api-access-h7k57\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.980642 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eb0de84-58b0-45c0-9aff-7b2a82d73079-log-httpd\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.980653 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9eb0de84-58b0-45c0-9aff-7b2a82d73079-run-httpd\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.980662 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.980671 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.980679 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-scripts\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:25 crc kubenswrapper[4970]: I0930 10:04:25.998154 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-03f9-account-create-wj49d"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.020972 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-config-data" (OuterVolumeSpecName: "config-data") pod "9eb0de84-58b0-45c0-9aff-7b2a82d73079" (UID: "9eb0de84-58b0-45c0-9aff-7b2a82d73079"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.082977 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9eb0de84-58b0-45c0-9aff-7b2a82d73079-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.460634 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-03f9-account-create-wj49d"]
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.810825 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8d91e0e8-ee07-493a-bb4d-6949ce548047","Type":"ContainerStarted","Data":"64c66610860812636faa0a4ea73db6eaf416f913f23f2d5f954fe98fa0289956"}
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.813265 4970 generic.go:334] "Generic (PLEG): container finished" podID="904d5a82-7c31-44b2-b222-58009f2f53ea" containerID="bb32fcdd0f48738a6fe2f819a80e303788ca253381b61529c98f614911741314" exitCode=0
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.813333 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-03f9-account-create-wj49d" event={"ID":"904d5a82-7c31-44b2-b222-58009f2f53ea","Type":"ContainerDied","Data":"bb32fcdd0f48738a6fe2f819a80e303788ca253381b61529c98f614911741314"}
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.813384 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-03f9-account-create-wj49d" event={"ID":"904d5a82-7c31-44b2-b222-58009f2f53ea","Type":"ContainerStarted","Data":"268cad65bdc83938c70efcbcd41f06269e19c61ec6354dd8356d50ee40604f7a"}
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.816756 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9eb0de84-58b0-45c0-9aff-7b2a82d73079","Type":"ContainerDied","Data":"85e0f9c09d86b13337bf2d00a50c4e393f67012306569700224b6fa5c37f3f56"}
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.816810 4970 scope.go:117] "RemoveContainer" containerID="127663ecbe2cc1d500f4ae987c736860c7b8cad788aca84c6e8f3cde29e83765"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.816849 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.843554 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.843838 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4048eb6d-7c40-4a50-9fba-d253cb710ee6" containerName="glance-log" containerID="cri-o://a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4" gracePeriod=30
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.844281 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4048eb6d-7c40-4a50-9fba-d253cb710ee6" containerName="glance-httpd" containerID="cri-o://9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374" gracePeriod=30
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.853450 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.717680843 podStartE2EDuration="15.853414676s" podCreationTimestamp="2025-09-30 10:04:11 +0000 UTC" firstStartedPulling="2025-09-30 10:04:12.299102485 +0000 UTC m=+1065.370953419" lastFinishedPulling="2025-09-30 10:04:25.434836318 +0000 UTC m=+1078.506687252" observedRunningTime="2025-09-30 10:04:26.841462748 +0000 UTC m=+1079.913313682" watchObservedRunningTime="2025-09-30 10:04:26.853414676 +0000 UTC m=+1079.925265610"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.860846 4970 scope.go:117] "RemoveContainer" containerID="cc73d4cfcc8b59cb9d39c7ff5c1e649b070263868227c4cb47574f82d8d9577e"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.898100 4970 scope.go:117] "RemoveContainer" containerID="dc4adcdf28c7d1fecb7cb582dd272479236654301e9a9826055c3ed1073d8cdf"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.907463 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.925683 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.945859 4970 scope.go:117] "RemoveContainer" containerID="b9009d88c7ee0430a93c6bd87022a31e376e616212194813caa0d493a547d591"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.967465 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:04:26 crc kubenswrapper[4970]: E0930 10:04:26.967972 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="proxy-httpd"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.968006 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="proxy-httpd"
Sep 30 10:04:26 crc kubenswrapper[4970]: E0930 10:04:26.968024 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="ceilometer-notification-agent"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.968031 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="ceilometer-notification-agent"
Sep 30 10:04:26 crc kubenswrapper[4970]: E0930 10:04:26.968048 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="ceilometer-central-agent"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.968055 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="ceilometer-central-agent"
Sep 30 10:04:26 crc kubenswrapper[4970]: E0930 10:04:26.968064 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="sg-core"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.968070 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="sg-core"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.968270 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="proxy-httpd"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.968294 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="ceilometer-central-agent"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.968309 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="sg-core"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.968324 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" containerName="ceilometer-notification-agent"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.970126 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.973512 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.973603 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Sep 30 10:04:26 crc kubenswrapper[4970]: I0930 10:04:26.981974 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.116376 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.116468 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-log-httpd\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.116503 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-scripts\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.116604 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcmgp\" (UniqueName: \"kubernetes.io/projected/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-kube-api-access-vcmgp\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.116670 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-config-data\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.116816 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-run-httpd\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.116923 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.219013 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-scripts\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.219085 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcmgp\" (UniqueName: \"kubernetes.io/projected/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-kube-api-access-vcmgp\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.219130 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-config-data\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.219171 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-run-httpd\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.219192 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.219242 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.219276 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-log-httpd\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.219670 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-log-httpd\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.219954 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-run-httpd\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.228525 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.229073 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.229118 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-config-data\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.235797 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6f6674bb4b-gp2wp" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.240525 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcmgp\" (UniqueName: \"kubernetes.io/projected/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-kube-api-access-vcmgp\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.240920 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-scripts\") pod \"ceilometer-0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") " pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.314751 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.682211 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9eb0de84-58b0-45c0-9aff-7b2a82d73079" path="/var/lib/kubelet/pods/9eb0de84-58b0-45c0-9aff-7b2a82d73079/volumes"
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.763147 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:04:27 crc kubenswrapper[4970]: E0930 10:04:27.778279 4970 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/7dfd0183fd5f1dc1c1206a4f2db9361fa72276f115bab37f76949d2f9c685e65/diff" to get inode usage: stat /var/lib/containers/storage/overlay/7dfd0183fd5f1dc1c1206a4f2db9361fa72276f115bab37f76949d2f9c685e65/diff: no such file or directory, extraDiskErr: <nil>
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.833806 4970 generic.go:334] "Generic (PLEG): container finished" podID="4048eb6d-7c40-4a50-9fba-d253cb710ee6" containerID="a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4" exitCode=143
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.833888 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4048eb6d-7c40-4a50-9fba-d253cb710ee6","Type":"ContainerDied","Data":"a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4"}
Sep 30 10:04:27 crc kubenswrapper[4970]: I0930 10:04:27.836434 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0","Type":"ContainerStarted","Data":"eb43ab0ecffb1b82db717a176db57e94ad8a48dd79bb332c876ffd1aa8214a79"}
Sep 30 10:04:28 crc kubenswrapper[4970]: I0930 10:04:28.220665 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:04:28 crc kubenswrapper[4970]: I0930 10:04:28.259887 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-03f9-account-create-wj49d"
Sep 30 10:04:28 crc kubenswrapper[4970]: I0930 10:04:28.340673 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4ds2\" (UniqueName: \"kubernetes.io/projected/904d5a82-7c31-44b2-b222-58009f2f53ea-kube-api-access-l4ds2\") pod \"904d5a82-7c31-44b2-b222-58009f2f53ea\" (UID: \"904d5a82-7c31-44b2-b222-58009f2f53ea\") "
Sep 30 10:04:28 crc kubenswrapper[4970]: I0930 10:04:28.346190 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/904d5a82-7c31-44b2-b222-58009f2f53ea-kube-api-access-l4ds2" (OuterVolumeSpecName: "kube-api-access-l4ds2") pod "904d5a82-7c31-44b2-b222-58009f2f53ea" (UID: "904d5a82-7c31-44b2-b222-58009f2f53ea"). InnerVolumeSpecName "kube-api-access-l4ds2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:04:28 crc kubenswrapper[4970]: I0930 10:04:28.442623 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4ds2\" (UniqueName: \"kubernetes.io/projected/904d5a82-7c31-44b2-b222-58009f2f53ea-kube-api-access-l4ds2\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:28 crc kubenswrapper[4970]: I0930 10:04:28.635683 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6f69b67d68-jtzdb"
Sep 30 10:04:28 crc kubenswrapper[4970]: I0930 10:04:28.824923 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="ce89199d-435e-4383-a28f-c6326ec1f954" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.155:9292/healthcheck\": read tcp 10.217.0.2:53210->10.217.0.155:9292: read: connection reset by peer"
Sep 30 10:04:28 crc kubenswrapper[4970]: I0930 10:04:28.825440 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="ce89199d-435e-4383-a28f-c6326ec1f954" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9292/healthcheck\": read tcp 10.217.0.2:53214->10.217.0.155:9292: read: connection reset by peer"
Sep 30 10:04:28 crc kubenswrapper[4970]: I0930 10:04:28.852358 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0","Type":"ContainerStarted","Data":"bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d"}
Sep 30 10:04:28 crc kubenswrapper[4970]: I0930 10:04:28.854309 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-03f9-account-create-wj49d" event={"ID":"904d5a82-7c31-44b2-b222-58009f2f53ea","Type":"ContainerDied","Data":"268cad65bdc83938c70efcbcd41f06269e19c61ec6354dd8356d50ee40604f7a"}
Sep 30 10:04:28 crc kubenswrapper[4970]: I0930 10:04:28.854355 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="268cad65bdc83938c70efcbcd41f06269e19c61ec6354dd8356d50ee40604f7a"
Sep 30 10:04:28 crc kubenswrapper[4970]: I0930 10:04:28.854354 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-03f9-account-create-wj49d"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.305392 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.372225 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce89199d-435e-4383-a28f-c6326ec1f954-logs\") pod \"ce89199d-435e-4383-a28f-c6326ec1f954\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") "
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.372801 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp9k9\" (UniqueName: \"kubernetes.io/projected/ce89199d-435e-4383-a28f-c6326ec1f954-kube-api-access-pp9k9\") pod \"ce89199d-435e-4383-a28f-c6326ec1f954\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") "
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.372858 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ce89199d-435e-4383-a28f-c6326ec1f954\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") "
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.372971 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-config-data\") pod \"ce89199d-435e-4383-a28f-c6326ec1f954\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") "
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.373029 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-public-tls-certs\") pod \"ce89199d-435e-4383-a28f-c6326ec1f954\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") "
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.373134 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-combined-ca-bundle\") pod \"ce89199d-435e-4383-a28f-c6326ec1f954\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") "
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.373224 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce89199d-435e-4383-a28f-c6326ec1f954-httpd-run\") pod \"ce89199d-435e-4383-a28f-c6326ec1f954\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") "
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.378093 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce89199d-435e-4383-a28f-c6326ec1f954-logs" (OuterVolumeSpecName: "logs") pod "ce89199d-435e-4383-a28f-c6326ec1f954" (UID: "ce89199d-435e-4383-a28f-c6326ec1f954"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.379585 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-scripts\") pod \"ce89199d-435e-4383-a28f-c6326ec1f954\" (UID: \"ce89199d-435e-4383-a28f-c6326ec1f954\") "
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.380011 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "ce89199d-435e-4383-a28f-c6326ec1f954" (UID: "ce89199d-435e-4383-a28f-c6326ec1f954"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.379459 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce89199d-435e-4383-a28f-c6326ec1f954-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ce89199d-435e-4383-a28f-c6326ec1f954" (UID: "ce89199d-435e-4383-a28f-c6326ec1f954"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.383953 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce89199d-435e-4383-a28f-c6326ec1f954-kube-api-access-pp9k9" (OuterVolumeSpecName: "kube-api-access-pp9k9") pod "ce89199d-435e-4383-a28f-c6326ec1f954" (UID: "ce89199d-435e-4383-a28f-c6326ec1f954"). InnerVolumeSpecName "kube-api-access-pp9k9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.384879 4970 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ce89199d-435e-4383-a28f-c6326ec1f954-httpd-run\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.384912 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ce89199d-435e-4383-a28f-c6326ec1f954-logs\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.384928 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp9k9\" (UniqueName: \"kubernetes.io/projected/ce89199d-435e-4383-a28f-c6326ec1f954-kube-api-access-pp9k9\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.385100 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.402669 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-scripts" (OuterVolumeSpecName: "scripts") pod "ce89199d-435e-4383-a28f-c6326ec1f954" (UID: "ce89199d-435e-4383-a28f-c6326ec1f954"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.420033 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ce89199d-435e-4383-a28f-c6326ec1f954" (UID: "ce89199d-435e-4383-a28f-c6326ec1f954"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.440232 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.471011 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-config-data" (OuterVolumeSpecName: "config-data") pod "ce89199d-435e-4383-a28f-c6326ec1f954" (UID: "ce89199d-435e-4383-a28f-c6326ec1f954"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.487280 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.487316 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.487327 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.487336 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-scripts\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.497662 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ce89199d-435e-4383-a28f-c6326ec1f954" (UID: "ce89199d-435e-4383-a28f-c6326ec1f954"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.591524 4970 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce89199d-435e-4383-a28f-c6326ec1f954-public-tls-certs\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.866963 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0","Type":"ContainerStarted","Data":"8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af"}
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.870356 4970 generic.go:334] "Generic (PLEG): container finished" podID="ce89199d-435e-4383-a28f-c6326ec1f954" containerID="ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5" exitCode=0
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.870392 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce89199d-435e-4383-a28f-c6326ec1f954","Type":"ContainerDied","Data":"ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5"}
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.870447 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ce89199d-435e-4383-a28f-c6326ec1f954","Type":"ContainerDied","Data":"aa5fc410a14c6f3243e29d5ef7d438833c9edd85aed04a3f54093e2efaf917c5"}
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.870470 4970 scope.go:117] "RemoveContainer" containerID="ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.870468 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.900225 4970 scope.go:117] "RemoveContainer" containerID="0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.902208 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.920331 4970 scope.go:117] "RemoveContainer" containerID="ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5"
Sep 30 10:04:29 crc kubenswrapper[4970]: E0930 10:04:29.922288 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5\": container with ID starting with ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5 not found: ID does not exist" containerID="ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.922356 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5"} err="failed to get container status \"ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5\": rpc error: code = NotFound desc = could not find container \"ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5\": container with ID starting with ef6e9915997791b6cc71d44c9dcc8f70ee0eb2c0a6d4657ecdd38f3b63de64f5 not found: ID does not exist"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.922399 4970 scope.go:117] "RemoveContainer" containerID="0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910"
Sep 30 10:04:29 crc kubenswrapper[4970]: E0930 10:04:29.923296 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910\": container with ID starting with 0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910 not found: ID does not exist" containerID="0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.923347 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910"} err="failed to get container status \"0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910\": rpc error: code = NotFound desc = could not find container \"0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910\": container with ID starting with 0753ec2c7c7eecc50be39b0c8affa407808c8e944552fa377bc9ec50f4e80910 not found: ID does not exist"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.934954 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.959060 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 30 10:04:29 crc kubenswrapper[4970]: E0930 10:04:29.959539 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce89199d-435e-4383-a28f-c6326ec1f954" containerName="glance-log"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.959555 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce89199d-435e-4383-a28f-c6326ec1f954" containerName="glance-log"
Sep 30 10:04:29 crc kubenswrapper[4970]: E0930 10:04:29.959564 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="904d5a82-7c31-44b2-b222-58009f2f53ea" containerName="mariadb-account-create"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.959571 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="904d5a82-7c31-44b2-b222-58009f2f53ea" containerName="mariadb-account-create"
Sep 30 10:04:29 crc kubenswrapper[4970]: E0930 10:04:29.959592 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce89199d-435e-4383-a28f-c6326ec1f954" containerName="glance-httpd"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.959599 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce89199d-435e-4383-a28f-c6326ec1f954" containerName="glance-httpd"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.959817 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce89199d-435e-4383-a28f-c6326ec1f954" containerName="glance-httpd"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.959835 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="904d5a82-7c31-44b2-b222-58009f2f53ea" containerName="mariadb-account-create"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.959852 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce89199d-435e-4383-a28f-c6326ec1f954" containerName="glance-log"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.960960 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.964494 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.964736 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Sep 30 10:04:29 crc kubenswrapper[4970]: I0930 10:04:29.967356 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.105639 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e691ead3-4698-47b1-9ea4-b63f8e649a34-scripts\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.105757 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.105795 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv6p2\" (UniqueName: \"kubernetes.io/projected/e691ead3-4698-47b1-9ea4-b63f8e649a34-kube-api-access-zv6p2\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.105832 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e691ead3-4698-47b1-9ea4-b63f8e649a34-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.105863 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e691ead3-4698-47b1-9ea4-b63f8e649a34-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.105879 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e691ead3-4698-47b1-9ea4-b63f8e649a34-logs\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.106101 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e691ead3-4698-47b1-9ea4-b63f8e649a34-config-data\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.106279 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691ead3-4698-47b1-9ea4-b63f8e649a34-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.207618 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.207670 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zv6p2\" (UniqueName: \"kubernetes.io/projected/e691ead3-4698-47b1-9ea4-b63f8e649a34-kube-api-access-zv6p2\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.207710 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e691ead3-4698-47b1-9ea4-b63f8e649a34-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.207742 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e691ead3-4698-47b1-9ea4-b63f8e649a34-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.207759 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e691ead3-4698-47b1-9ea4-b63f8e649a34-logs\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.207779 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e691ead3-4698-47b1-9ea4-b63f8e649a34-config-data\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.207808 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691ead3-4698-47b1-9ea4-b63f8e649a34-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.207866 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e691ead3-4698-47b1-9ea4-b63f8e649a34-scripts\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.208868 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e691ead3-4698-47b1-9ea4-b63f8e649a34-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.209181 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.211804 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e691ead3-4698-47b1-9ea4-b63f8e649a34-scripts\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.212764 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e691ead3-4698-47b1-9ea4-b63f8e649a34-logs\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.219730 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e691ead3-4698-47b1-9ea4-b63f8e649a34-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0"
Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.219739 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e691ead3-4698-47b1-9ea4-b63f8e649a34-config-data\") pod \"glance-default-external-api-0\" (UID:
\"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.219797 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e691ead3-4698-47b1-9ea4-b63f8e649a34-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.239627 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv6p2\" (UniqueName: \"kubernetes.io/projected/e691ead3-4698-47b1-9ea4-b63f8e649a34-kube-api-access-zv6p2\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.248337 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"e691ead3-4698-47b1-9ea4-b63f8e649a34\") " pod="openstack/glance-default-external-api-0" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.283430 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.634424 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.725732 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-internal-tls-certs\") pod \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.725804 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-scripts\") pod \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.725830 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.725911 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rnkj\" (UniqueName: \"kubernetes.io/projected/4048eb6d-7c40-4a50-9fba-d253cb710ee6-kube-api-access-7rnkj\") pod \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.725935 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-combined-ca-bundle\") pod \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.726142 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4048eb6d-7c40-4a50-9fba-d253cb710ee6-logs\") pod \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.726160 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-config-data\") pod \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.726184 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4048eb6d-7c40-4a50-9fba-d253cb710ee6-httpd-run\") pod \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\" (UID: \"4048eb6d-7c40-4a50-9fba-d253cb710ee6\") " Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.727274 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4048eb6d-7c40-4a50-9fba-d253cb710ee6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4048eb6d-7c40-4a50-9fba-d253cb710ee6" (UID: "4048eb6d-7c40-4a50-9fba-d253cb710ee6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.728584 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4048eb6d-7c40-4a50-9fba-d253cb710ee6-logs" (OuterVolumeSpecName: "logs") pod "4048eb6d-7c40-4a50-9fba-d253cb710ee6" (UID: "4048eb6d-7c40-4a50-9fba-d253cb710ee6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.738351 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-scripts" (OuterVolumeSpecName: "scripts") pod "4048eb6d-7c40-4a50-9fba-d253cb710ee6" (UID: "4048eb6d-7c40-4a50-9fba-d253cb710ee6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.738566 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4048eb6d-7c40-4a50-9fba-d253cb710ee6-kube-api-access-7rnkj" (OuterVolumeSpecName: "kube-api-access-7rnkj") pod "4048eb6d-7c40-4a50-9fba-d253cb710ee6" (UID: "4048eb6d-7c40-4a50-9fba-d253cb710ee6"). InnerVolumeSpecName "kube-api-access-7rnkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.741514 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "4048eb6d-7c40-4a50-9fba-d253cb710ee6" (UID: "4048eb6d-7c40-4a50-9fba-d253cb710ee6"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.778318 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4048eb6d-7c40-4a50-9fba-d253cb710ee6" (UID: "4048eb6d-7c40-4a50-9fba-d253cb710ee6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.809733 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4048eb6d-7c40-4a50-9fba-d253cb710ee6" (UID: "4048eb6d-7c40-4a50-9fba-d253cb710ee6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.813531 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-config-data" (OuterVolumeSpecName: "config-data") pod "4048eb6d-7c40-4a50-9fba-d253cb710ee6" (UID: "4048eb6d-7c40-4a50-9fba-d253cb710ee6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.831219 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4048eb6d-7c40-4a50-9fba-d253cb710ee6-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.831256 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.831269 4970 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4048eb6d-7c40-4a50-9fba-d253cb710ee6-httpd-run\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.831282 4970 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.831297 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.831329 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.831364 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rnkj\" (UniqueName: \"kubernetes.io/projected/4048eb6d-7c40-4a50-9fba-d253cb710ee6-kube-api-access-7rnkj\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.831379 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4048eb6d-7c40-4a50-9fba-d253cb710ee6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.850841 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.886768 4970 generic.go:334] "Generic (PLEG): container finished" podID="4048eb6d-7c40-4a50-9fba-d253cb710ee6" containerID="9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374" exitCode=0 Sep 30 10:04:30 crc kubenswrapper[4970]: 
I0930 10:04:30.886821 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4048eb6d-7c40-4a50-9fba-d253cb710ee6","Type":"ContainerDied","Data":"9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374"} Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.886873 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.886882 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4048eb6d-7c40-4a50-9fba-d253cb710ee6","Type":"ContainerDied","Data":"03dc2c4ac4554ac00d041777561be6390bff9473b6f0891a6bb5640dee1b5bf0"} Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.886898 4970 scope.go:117] "RemoveContainer" containerID="9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.889618 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0","Type":"ContainerStarted","Data":"4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75"} Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.912205 4970 scope.go:117] "RemoveContainer" containerID="a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.930859 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.932209 4970 scope.go:117] "RemoveContainer" containerID="9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374" Sep 30 10:04:30 crc kubenswrapper[4970]: E0930 10:04:30.932589 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374\": container with ID starting with 9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374 not found: ID does not exist" containerID="9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.932630 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374"} err="failed to get container status \"9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374\": rpc error: code = NotFound desc = could not find container \"9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374\": container with ID starting with 9103d4f8892bbdbb20e156d2fcfff138ffb1b90528f5e67163fb37f3f9855374 not found: ID does not exist" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.932659 4970 scope.go:117] "RemoveContainer" containerID="a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.932846 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:30 crc kubenswrapper[4970]: E0930 10:04:30.933113 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4\": container with ID starting with 
a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4 not found: ID does not exist" containerID="a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.933149 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4"} err="failed to get container status \"a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4\": rpc error: code = NotFound desc = could not find container \"a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4\": container with ID starting with a951047125f40610e0e85243a46b65a22921e513d1f815d16bc7e292138c99a4 not found: ID does not exist" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.939646 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.953390 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:04:30 crc kubenswrapper[4970]: E0930 10:04:30.953905 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4048eb6d-7c40-4a50-9fba-d253cb710ee6" containerName="glance-log" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.953936 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4048eb6d-7c40-4a50-9fba-d253cb710ee6" containerName="glance-log" Sep 30 10:04:30 crc kubenswrapper[4970]: E0930 10:04:30.953961 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4048eb6d-7c40-4a50-9fba-d253cb710ee6" containerName="glance-httpd" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.953969 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4048eb6d-7c40-4a50-9fba-d253cb710ee6" containerName="glance-httpd" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.954206 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="4048eb6d-7c40-4a50-9fba-d253cb710ee6" containerName="glance-log" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.954235 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="4048eb6d-7c40-4a50-9fba-d253cb710ee6" containerName="glance-httpd" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.955538 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.958922 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.959245 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Sep 30 10:04:30 crc kubenswrapper[4970]: I0930 10:04:30.974946 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.031880 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.034775 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.034849 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/354c5f9e-ca1b-4724-960f-a376abda6ee2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.034884 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354c5f9e-ca1b-4724-960f-a376abda6ee2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.034936 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/354c5f9e-ca1b-4724-960f-a376abda6ee2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.034980 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354c5f9e-ca1b-4724-960f-a376abda6ee2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.035058 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsgth\" (UniqueName: \"kubernetes.io/projected/354c5f9e-ca1b-4724-960f-a376abda6ee2-kube-api-access-gsgth\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.035105 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354c5f9e-ca1b-4724-960f-a376abda6ee2-logs\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " 
pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.035193 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354c5f9e-ca1b-4724-960f-a376abda6ee2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.137203 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354c5f9e-ca1b-4724-960f-a376abda6ee2-logs\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.137586 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354c5f9e-ca1b-4724-960f-a376abda6ee2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.137629 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.137658 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/354c5f9e-ca1b-4724-960f-a376abda6ee2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.137687 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354c5f9e-ca1b-4724-960f-a376abda6ee2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.138116 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.138253 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/354c5f9e-ca1b-4724-960f-a376abda6ee2-logs\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.138315 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/354c5f9e-ca1b-4724-960f-a376abda6ee2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.141854 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/354c5f9e-ca1b-4724-960f-a376abda6ee2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.141940 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354c5f9e-ca1b-4724-960f-a376abda6ee2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.142061 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsgth\" (UniqueName: \"kubernetes.io/projected/354c5f9e-ca1b-4724-960f-a376abda6ee2-kube-api-access-gsgth\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.143655 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/354c5f9e-ca1b-4724-960f-a376abda6ee2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.144358 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/354c5f9e-ca1b-4724-960f-a376abda6ee2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.151110 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/354c5f9e-ca1b-4724-960f-a376abda6ee2-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.153658 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/354c5f9e-ca1b-4724-960f-a376abda6ee2-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.164206 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsgth\" (UniqueName: \"kubernetes.io/projected/354c5f9e-ca1b-4724-960f-a376abda6ee2-kube-api-access-gsgth\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.208337 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-internal-api-0\" (UID: \"354c5f9e-ca1b-4724-960f-a376abda6ee2\") " pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.281404 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.700030 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4048eb6d-7c40-4a50-9fba-d253cb710ee6" path="/var/lib/kubelet/pods/4048eb6d-7c40-4a50-9fba-d253cb710ee6/volumes" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.701118 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce89199d-435e-4383-a28f-c6326ec1f954" path="/var/lib/kubelet/pods/ce89199d-435e-4383-a28f-c6326ec1f954/volumes" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.816388 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-65fc8b84cc-9lm9w" Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.893732 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f69b67d68-jtzdb"] Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.894016 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f69b67d68-jtzdb" podUID="03969bb8-1309-49b6-910c-de58a7d7b3d4" containerName="neutron-api" containerID="cri-o://612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73" gracePeriod=30 Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.894124 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6f69b67d68-jtzdb" podUID="03969bb8-1309-49b6-910c-de58a7d7b3d4" containerName="neutron-httpd" containerID="cri-o://0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa" gracePeriod=30 Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.936061 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e691ead3-4698-47b1-9ea4-b63f8e649a34","Type":"ContainerStarted","Data":"0d0ce12855d2cae760b9e4feaa384c94ca557ff419d505d7f288a836dec2577e"} Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.936102 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e691ead3-4698-47b1-9ea4-b63f8e649a34","Type":"ContainerStarted","Data":"62fc8535ff5a304cc8a872d83d356ea30c516edb642d28da3dd4120c788f9908"} Sep 30 10:04:31 crc kubenswrapper[4970]: I0930 10:04:31.943677 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Sep 30 10:04:32 crc kubenswrapper[4970]: I0930 10:04:32.951723 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"354c5f9e-ca1b-4724-960f-a376abda6ee2","Type":"ContainerStarted","Data":"3bb584d6c2db54b896d81633cbc63472e8b1ad7f4bcdbd549bdd255138874405"} Sep 30 10:04:32 crc kubenswrapper[4970]: I0930 10:04:32.952195 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"354c5f9e-ca1b-4724-960f-a376abda6ee2","Type":"ContainerStarted","Data":"e74c0a1a4ad28f0b31347215d3f3a0bcb5c35ac70e73847aad2a3716d96b9c53"} Sep 30 10:04:32 crc kubenswrapper[4970]: I0930 10:04:32.959979 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0","Type":"ContainerStarted","Data":"c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193"} Sep 30 10:04:32 crc kubenswrapper[4970]: I0930 10:04:32.960265 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="ceilometer-central-agent" containerID="cri-o://bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d" gracePeriod=30 Sep 30 10:04:32 crc kubenswrapper[4970]: I0930 10:04:32.960444 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="sg-core" containerID="cri-o://4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75" gracePeriod=30 Sep 30 10:04:32 crc kubenswrapper[4970]: I0930 10:04:32.960496 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 30 10:04:32 crc kubenswrapper[4970]: I0930 10:04:32.960521 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="ceilometer-notification-agent" containerID="cri-o://8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af" gracePeriod=30 Sep 30 10:04:32 crc kubenswrapper[4970]: I0930 10:04:32.960500 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="proxy-httpd" containerID="cri-o://c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193" gracePeriod=30 Sep 30 10:04:32 crc kubenswrapper[4970]: I0930 10:04:32.970175 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e691ead3-4698-47b1-9ea4-b63f8e649a34","Type":"ContainerStarted","Data":"209448dbbbd1c495ddc2b8fcce455411a7bb4954f058b6714e986dba0fa5875e"} Sep 30 10:04:32 crc kubenswrapper[4970]: I0930 10:04:32.977364 4970 generic.go:334] "Generic (PLEG): container finished" podID="03969bb8-1309-49b6-910c-de58a7d7b3d4" containerID="0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa" exitCode=0 Sep 30 10:04:32 crc kubenswrapper[4970]: I0930 10:04:32.977424 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f69b67d68-jtzdb" event={"ID":"03969bb8-1309-49b6-910c-de58a7d7b3d4","Type":"ContainerDied","Data":"0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa"} Sep 30 10:04:33 crc kubenswrapper[4970]: I0930 10:04:33.009435 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.206805091 podStartE2EDuration="7.009408219s" podCreationTimestamp="2025-09-30 10:04:26 +0000 UTC" firstStartedPulling="2025-09-30 10:04:27.766225607 +0000 UTC m=+1080.838076541" lastFinishedPulling="2025-09-30 10:04:31.568828735 +0000 UTC m=+1084.640679669" observedRunningTime="2025-09-30 10:04:33.002365032 +0000 UTC m=+1086.074215966" watchObservedRunningTime="2025-09-30 10:04:33.009408219 +0000 UTC m=+1086.081259153" Sep 30 10:04:33 crc kubenswrapper[4970]: I0930 10:04:33.994205 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"354c5f9e-ca1b-4724-960f-a376abda6ee2","Type":"ContainerStarted","Data":"6ee10bbed4f9267fd5f7723660d51ed977aadbc0a1598a63180394d5368ab085"} Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.012132 4970 generic.go:334] "Generic (PLEG): container finished" podID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerID="c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193" exitCode=0 Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.012479 4970 generic.go:334] "Generic 
(PLEG): container finished" podID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerID="4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75" exitCode=2 Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.012489 4970 generic.go:334] "Generic (PLEG): container finished" podID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerID="8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af" exitCode=0 Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.012549 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0","Type":"ContainerDied","Data":"c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193"} Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.012577 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0","Type":"ContainerDied","Data":"4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75"} Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.012587 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0","Type":"ContainerDied","Data":"8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af"} Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.032411 4970 generic.go:334] "Generic (PLEG): container finished" podID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerID="09566df69d0b238347b6a02d51e9d164ccc3a9001a0cbb3925c87d0cac561a81" exitCode=137 Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.032625 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6674bb4b-gp2wp" event={"ID":"48bd5d72-868c-4d81-9d1f-6a03ba997169","Type":"ContainerDied","Data":"09566df69d0b238347b6a02d51e9d164ccc3a9001a0cbb3925c87d0cac561a81"} Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.032627 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.032607386 podStartE2EDuration="5.032607386s" podCreationTimestamp="2025-09-30 10:04:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:04:33.042838609 +0000 UTC m=+1086.114689543" watchObservedRunningTime="2025-09-30 10:04:34.032607386 +0000 UTC m=+1087.104458320" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.034597 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.034592219 podStartE2EDuration="4.034592219s" podCreationTimestamp="2025-09-30 10:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:04:34.02260301 +0000 UTC m=+1087.094453954" watchObservedRunningTime="2025-09-30 10:04:34.034592219 +0000 UTC m=+1087.106443153" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.227399 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.336824 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-horizon-tls-certs\") pod \"48bd5d72-868c-4d81-9d1f-6a03ba997169\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.336900 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-combined-ca-bundle\") pod \"48bd5d72-868c-4d81-9d1f-6a03ba997169\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.336941 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48bd5d72-868c-4d81-9d1f-6a03ba997169-config-data\") pod \"48bd5d72-868c-4d81-9d1f-6a03ba997169\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.337009 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqn89\" (UniqueName: \"kubernetes.io/projected/48bd5d72-868c-4d81-9d1f-6a03ba997169-kube-api-access-fqn89\") pod \"48bd5d72-868c-4d81-9d1f-6a03ba997169\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.337038 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-horizon-secret-key\") pod \"48bd5d72-868c-4d81-9d1f-6a03ba997169\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.337913 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48bd5d72-868c-4d81-9d1f-6a03ba997169-scripts\") pod \"48bd5d72-868c-4d81-9d1f-6a03ba997169\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.338028 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48bd5d72-868c-4d81-9d1f-6a03ba997169-logs\") pod \"48bd5d72-868c-4d81-9d1f-6a03ba997169\" (UID: \"48bd5d72-868c-4d81-9d1f-6a03ba997169\") " Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.339693 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48bd5d72-868c-4d81-9d1f-6a03ba997169-logs" (OuterVolumeSpecName: "logs") pod "48bd5d72-868c-4d81-9d1f-6a03ba997169" (UID: "48bd5d72-868c-4d81-9d1f-6a03ba997169"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.343807 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48bd5d72-868c-4d81-9d1f-6a03ba997169-kube-api-access-fqn89" (OuterVolumeSpecName: "kube-api-access-fqn89") pod "48bd5d72-868c-4d81-9d1f-6a03ba997169" (UID: "48bd5d72-868c-4d81-9d1f-6a03ba997169"). InnerVolumeSpecName "kube-api-access-fqn89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.343835 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "48bd5d72-868c-4d81-9d1f-6a03ba997169" (UID: "48bd5d72-868c-4d81-9d1f-6a03ba997169"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.365478 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48bd5d72-868c-4d81-9d1f-6a03ba997169-config-data" (OuterVolumeSpecName: "config-data") pod "48bd5d72-868c-4d81-9d1f-6a03ba997169" (UID: "48bd5d72-868c-4d81-9d1f-6a03ba997169"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.378940 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48bd5d72-868c-4d81-9d1f-6a03ba997169-scripts" (OuterVolumeSpecName: "scripts") pod "48bd5d72-868c-4d81-9d1f-6a03ba997169" (UID: "48bd5d72-868c-4d81-9d1f-6a03ba997169"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.385774 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48bd5d72-868c-4d81-9d1f-6a03ba997169" (UID: "48bd5d72-868c-4d81-9d1f-6a03ba997169"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.411496 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "48bd5d72-868c-4d81-9d1f-6a03ba997169" (UID: "48bd5d72-868c-4d81-9d1f-6a03ba997169"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.440152 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqn89\" (UniqueName: \"kubernetes.io/projected/48bd5d72-868c-4d81-9d1f-6a03ba997169-kube-api-access-fqn89\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.440194 4970 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.440204 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48bd5d72-868c-4d81-9d1f-6a03ba997169-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.440213 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/48bd5d72-868c-4d81-9d1f-6a03ba997169-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.440222 4970 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.440232 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48bd5d72-868c-4d81-9d1f-6a03ba997169-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:34 crc kubenswrapper[4970]: I0930 10:04:34.440241 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/48bd5d72-868c-4d81-9d1f-6a03ba997169-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.050559 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f6674bb4b-gp2wp" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.050764 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f6674bb4b-gp2wp" event={"ID":"48bd5d72-868c-4d81-9d1f-6a03ba997169","Type":"ContainerDied","Data":"cf4d95d501312c144c9a2b103bdf725d132bc371b34fa99b2a6e1cf4ec4619ca"} Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.051781 4970 scope.go:117] "RemoveContainer" containerID="d403e368d182209dd7a414661db3f68d40f32e6e8d7e33e45fb6614d5c1c8d68" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.116303 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f6674bb4b-gp2wp"] Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.130040 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f6674bb4b-gp2wp"] Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.245801 4970 scope.go:117] "RemoveContainer" containerID="09566df69d0b238347b6a02d51e9d164ccc3a9001a0cbb3925c87d0cac561a81" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.679602 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" path="/var/lib/kubelet/pods/48bd5d72-868c-4d81-9d1f-6a03ba997169/volumes" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.761934 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-7fb2-account-create-t9tmn"] Sep 30 10:04:35 crc kubenswrapper[4970]: E0930 10:04:35.762358 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.762378 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon" Sep 30 10:04:35 crc kubenswrapper[4970]: E0930 10:04:35.762405 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon-log" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.762412 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon-log" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.762564 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.762601 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="48bd5d72-868c-4d81-9d1f-6a03ba997169" containerName="horizon-log" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.763348 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7fb2-account-create-t9tmn" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.765474 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.773606 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7fb2-account-create-t9tmn"] Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.864506 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-788kg\" (UniqueName: \"kubernetes.io/projected/59187562-fc0e-49ac-b22d-81abc6850bd7-kube-api-access-788kg\") pod \"nova-cell0-7fb2-account-create-t9tmn\" (UID: \"59187562-fc0e-49ac-b22d-81abc6850bd7\") " pod="openstack/nova-cell0-7fb2-account-create-t9tmn" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.955844 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-91c3-account-create-f6wvn"] Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.957342 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-91c3-account-create-f6wvn" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.959171 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.966402 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-788kg\" (UniqueName: \"kubernetes.io/projected/59187562-fc0e-49ac-b22d-81abc6850bd7-kube-api-access-788kg\") pod \"nova-cell0-7fb2-account-create-t9tmn\" (UID: \"59187562-fc0e-49ac-b22d-81abc6850bd7\") " pod="openstack/nova-cell0-7fb2-account-create-t9tmn" Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.974940 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-91c3-account-create-f6wvn"] Sep 30 10:04:35 crc kubenswrapper[4970]: I0930 10:04:35.997266 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-788kg\" (UniqueName: \"kubernetes.io/projected/59187562-fc0e-49ac-b22d-81abc6850bd7-kube-api-access-788kg\") pod \"nova-cell0-7fb2-account-create-t9tmn\" (UID: \"59187562-fc0e-49ac-b22d-81abc6850bd7\") " pod="openstack/nova-cell0-7fb2-account-create-t9tmn" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.068330 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dglfz\" (UniqueName: \"kubernetes.io/projected/f6c12a73-2619-429d-9ef3-2a7a25cc0906-kube-api-access-dglfz\") pod \"nova-cell1-91c3-account-create-f6wvn\" (UID: \"f6c12a73-2619-429d-9ef3-2a7a25cc0906\") " pod="openstack/nova-cell1-91c3-account-create-f6wvn" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.090022 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-7fb2-account-create-t9tmn" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.169507 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dglfz\" (UniqueName: \"kubernetes.io/projected/f6c12a73-2619-429d-9ef3-2a7a25cc0906-kube-api-access-dglfz\") pod \"nova-cell1-91c3-account-create-f6wvn\" (UID: \"f6c12a73-2619-429d-9ef3-2a7a25cc0906\") " pod="openstack/nova-cell1-91c3-account-create-f6wvn" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.191868 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dglfz\" (UniqueName: \"kubernetes.io/projected/f6c12a73-2619-429d-9ef3-2a7a25cc0906-kube-api-access-dglfz\") pod \"nova-cell1-91c3-account-create-f6wvn\" (UID: \"f6c12a73-2619-429d-9ef3-2a7a25cc0906\") " pod="openstack/nova-cell1-91c3-account-create-f6wvn" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.271343 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-91c3-account-create-f6wvn" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.602854 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-7fb2-account-create-t9tmn"] Sep 30 10:04:36 crc kubenswrapper[4970]: W0930 10:04:36.603928 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59187562_fc0e_49ac_b22d_81abc6850bd7.slice/crio-aead8ab6ccd62022ec11aadd91e854b385ddf7c16e6020e087c834d7943a6d0a WatchSource:0}: Error finding container aead8ab6ccd62022ec11aadd91e854b385ddf7c16e6020e087c834d7943a6d0a: Status 404 returned error can't find the container with id aead8ab6ccd62022ec11aadd91e854b385ddf7c16e6020e087c834d7943a6d0a Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.662113 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6f69b67d68-jtzdb" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.790245 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-config\") pod \"03969bb8-1309-49b6-910c-de58a7d7b3d4\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.790344 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwgx5\" (UniqueName: \"kubernetes.io/projected/03969bb8-1309-49b6-910c-de58a7d7b3d4-kube-api-access-jwgx5\") pod \"03969bb8-1309-49b6-910c-de58a7d7b3d4\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.790450 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-combined-ca-bundle\") pod \"03969bb8-1309-49b6-910c-de58a7d7b3d4\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.790682 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-httpd-config\") pod \"03969bb8-1309-49b6-910c-de58a7d7b3d4\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.790715 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-ovndb-tls-certs\") pod \"03969bb8-1309-49b6-910c-de58a7d7b3d4\" (UID: \"03969bb8-1309-49b6-910c-de58a7d7b3d4\") " Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.807161 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "03969bb8-1309-49b6-910c-de58a7d7b3d4" (UID: "03969bb8-1309-49b6-910c-de58a7d7b3d4"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.807191 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03969bb8-1309-49b6-910c-de58a7d7b3d4-kube-api-access-jwgx5" (OuterVolumeSpecName: "kube-api-access-jwgx5") pod "03969bb8-1309-49b6-910c-de58a7d7b3d4" (UID: "03969bb8-1309-49b6-910c-de58a7d7b3d4"). InnerVolumeSpecName "kube-api-access-jwgx5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.842169 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-91c3-account-create-f6wvn"] Sep 30 10:04:36 crc kubenswrapper[4970]: W0930 10:04:36.858691 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6c12a73_2619_429d_9ef3_2a7a25cc0906.slice/crio-1d9aa9c907f9d598c571f7a739f6e712fe1e2774ce8820168d739fcdd7615c83 WatchSource:0}: Error finding container 1d9aa9c907f9d598c571f7a739f6e712fe1e2774ce8820168d739fcdd7615c83: Status 404 returned error can't find the container with id 1d9aa9c907f9d598c571f7a739f6e712fe1e2774ce8820168d739fcdd7615c83 Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.893368 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwgx5\" (UniqueName: \"kubernetes.io/projected/03969bb8-1309-49b6-910c-de58a7d7b3d4-kube-api-access-jwgx5\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.893406 4970 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-httpd-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.916203 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-config" (OuterVolumeSpecName: "config") pod "03969bb8-1309-49b6-910c-de58a7d7b3d4" (UID: "03969bb8-1309-49b6-910c-de58a7d7b3d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.929273 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "03969bb8-1309-49b6-910c-de58a7d7b3d4" (UID: "03969bb8-1309-49b6-910c-de58a7d7b3d4"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.937619 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03969bb8-1309-49b6-910c-de58a7d7b3d4" (UID: "03969bb8-1309-49b6-910c-de58a7d7b3d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.995367 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.995405 4970 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:36 crc kubenswrapper[4970]: I0930 10:04:36.995416 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/03969bb8-1309-49b6-910c-de58a7d7b3d4-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.075870 4970 generic.go:334] "Generic (PLEG): container finished" podID="59187562-fc0e-49ac-b22d-81abc6850bd7" containerID="4b7f31b37957c7407e7f3cf148c36b560bf5432f0397f407b716f0c16365645e" exitCode=0 Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.075940 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7fb2-account-create-t9tmn" event={"ID":"59187562-fc0e-49ac-b22d-81abc6850bd7","Type":"ContainerDied","Data":"4b7f31b37957c7407e7f3cf148c36b560bf5432f0397f407b716f0c16365645e"} Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.076478 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7fb2-account-create-t9tmn" event={"ID":"59187562-fc0e-49ac-b22d-81abc6850bd7","Type":"ContainerStarted","Data":"aead8ab6ccd62022ec11aadd91e854b385ddf7c16e6020e087c834d7943a6d0a"} Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.081040 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-91c3-account-create-f6wvn" event={"ID":"f6c12a73-2619-429d-9ef3-2a7a25cc0906","Type":"ContainerStarted","Data":"62220ee2152481ebb859c99856a6bd8932a403e9d475ea15239dcc19ffbb9884"} Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.081086 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-91c3-account-create-f6wvn" event={"ID":"f6c12a73-2619-429d-9ef3-2a7a25cc0906","Type":"ContainerStarted","Data":"1d9aa9c907f9d598c571f7a739f6e712fe1e2774ce8820168d739fcdd7615c83"} Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.085628 4970 generic.go:334] "Generic (PLEG): container finished" podID="03969bb8-1309-49b6-910c-de58a7d7b3d4" containerID="612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73" exitCode=0 Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.085684 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f69b67d68-jtzdb" event={"ID":"03969bb8-1309-49b6-910c-de58a7d7b3d4","Type":"ContainerDied","Data":"612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73"} Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.085735 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6f69b67d68-jtzdb" event={"ID":"03969bb8-1309-49b6-910c-de58a7d7b3d4","Type":"ContainerDied","Data":"8e7735a9020a5372c8b30c13631ec410b4873522a8215b350db8f3a3f755ca16"} Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.085757 4970 scope.go:117] "RemoveContainer" containerID="0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa" Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.085941 4970 util.go:48] "No ready sandbox for pod can be 
Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.125173 4970 scope.go:117] "RemoveContainer" containerID="612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73"
Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.136660 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-91c3-account-create-f6wvn" podStartSLOduration=2.136642616 podStartE2EDuration="2.136642616s" podCreationTimestamp="2025-09-30 10:04:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:04:37.117064915 +0000 UTC m=+1090.188915849" watchObservedRunningTime="2025-09-30 10:04:37.136642616 +0000 UTC m=+1090.208493550"
Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.143336 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6f69b67d68-jtzdb"]
Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.150183 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6f69b67d68-jtzdb"]
Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.184614 4970 scope.go:117] "RemoveContainer" containerID="0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa"
Sep 30 10:04:37 crc kubenswrapper[4970]: E0930 10:04:37.185165 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa\": container with ID starting with 0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa not found: ID does not exist" containerID="0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa"
Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.185215 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa"} err="failed to get container status \"0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa\": rpc error: code = NotFound desc = could not find container \"0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa\": container with ID starting with 0dfd036b9f4e0b38faa48b4b60801e2ef16b4a8ada357b90610dd454e3b094fa not found: ID does not exist"
Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.185251 4970 scope.go:117] "RemoveContainer" containerID="612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73"
Sep 30 10:04:37 crc kubenswrapper[4970]: E0930 10:04:37.185657 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73\": container with ID starting with 612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73 not found: ID does not exist" containerID="612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73"
Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.185687 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73"} err="failed to get container status \"612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73\": rpc error: code = NotFound desc = could not find container \"612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73\": container with ID starting with 612132dff71bafc21cdffa2448197caa3ce8e2ffce4c9e687f806fd057289a73 not found: ID does not exist"
Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.682931 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03969bb8-1309-49b6-910c-de58a7d7b3d4" path="/var/lib/kubelet/pods/03969bb8-1309-49b6-910c-de58a7d7b3d4/volumes"
Sep 30 10:04:37 crc kubenswrapper[4970]: I0930 10:04:37.964848 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.014596 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-scripts\") pod \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") "
Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.014642 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-combined-ca-bundle\") pod \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") "
Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.014734 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-run-httpd\") pod \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") "
Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.014767 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-sg-core-conf-yaml\") pod \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") "
Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.014857 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcmgp\" (UniqueName: \"kubernetes.io/projected/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-kube-api-access-vcmgp\") pod \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") "
Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.014886 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-config-data\") pod \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") "
Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.015040 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-log-httpd\") pod \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\" (UID: \"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0\") "
Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.015548 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" (UID: "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
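
Note: the pod_startup_latency_tracker entry above reports podStartSLOduration and podStartE2EDuration as the same 2.136642616s, and its firstStartedPulling/lastFinishedPulling fields are Go's zero time.Time (0001-01-01 00:00:00 +0000 UTC), which suggests no image pull was observed for this pod, so the SLO-relevant duration and the end-to-end duration coincide. A field-extraction sketch (Python; the parsing choices are assumptions based on the line format above):

    import re

    SENTINEL = '0001-01-01 00:00:00 +0000 UTC'  # Go zero time: no pull recorded

    DUR = re.compile(r'pod="(?P<pod>[^"]+)" podStartSLOduration=(?P<slo>[\d.]+) '
                     r'podStartE2EDuration="(?P<e2e>[^"]+)"')

    def startup_slo(line):
        m = DUR.search(line)
        if not m:
            return None
        pulled = SENTINEL not in line  # False => image was already present
        return m.group('pod'), float(m.group('slo')), m.group('e2e'), pulled
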
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.016219 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" (UID: "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.020257 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-kube-api-access-vcmgp" (OuterVolumeSpecName: "kube-api-access-vcmgp") pod "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" (UID: "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0"). InnerVolumeSpecName "kube-api-access-vcmgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.041699 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-scripts" (OuterVolumeSpecName: "scripts") pod "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" (UID: "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.053012 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" (UID: "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.100027 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0","Type":"ContainerDied","Data":"bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d"} Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.100089 4970 scope.go:117] "RemoveContainer" containerID="c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.100175 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.101072 4970 generic.go:334] "Generic (PLEG): container finished" podID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerID="bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d" exitCode=0 Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.101232 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6e9ce363-bf0f-41d0-927d-cc923c3e4bc0","Type":"ContainerDied","Data":"eb43ab0ecffb1b82db717a176db57e94ad8a48dd79bb332c876ffd1aa8214a79"} Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.103909 4970 generic.go:334] "Generic (PLEG): container finished" podID="f6c12a73-2619-429d-9ef3-2a7a25cc0906" containerID="62220ee2152481ebb859c99856a6bd8932a403e9d475ea15239dcc19ffbb9884" exitCode=0 Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.104181 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-91c3-account-create-f6wvn" event={"ID":"f6c12a73-2619-429d-9ef3-2a7a25cc0906","Type":"ContainerDied","Data":"62220ee2152481ebb859c99856a6bd8932a403e9d475ea15239dcc19ffbb9884"} Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.117583 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.117626 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.117637 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.117650 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcmgp\" (UniqueName: \"kubernetes.io/projected/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-kube-api-access-vcmgp\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.117662 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.138150 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" (UID: "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.141830 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-config-data" (OuterVolumeSpecName: "config-data") pod "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" (UID: "6e9ce363-bf0f-41d0-927d-cc923c3e4bc0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.212804 4970 scope.go:117] "RemoveContainer" containerID="4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.224363 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.224397 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.250854 4970 scope.go:117] "RemoveContainer" containerID="8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.280024 4970 scope.go:117] "RemoveContainer" containerID="bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.312339 4970 scope.go:117] "RemoveContainer" containerID="c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193" Sep 30 10:04:38 crc kubenswrapper[4970]: E0930 10:04:38.313002 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193\": container with ID starting with c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193 not found: ID does not exist" containerID="c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.313036 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193"} err="failed to get container status \"c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193\": rpc error: code = NotFound desc = could not find container \"c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193\": container with ID starting with c46ac416f054b1e446f10c32672308a4ddb14c9978e31952e61ad1aa3299c193 not found: ID does not exist" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.313064 4970 scope.go:117] "RemoveContainer" containerID="4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75" Sep 30 10:04:38 crc kubenswrapper[4970]: E0930 10:04:38.314569 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75\": container with ID starting with 4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75 not found: ID does not exist" containerID="4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.314620 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75"} err="failed to get container status \"4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75\": rpc error: code = NotFound desc = could not find container \"4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75\": container with ID starting with 
4484f2bfe1682295abc1723144db12de1e8d8fdd1a35432cbb6ae7859d72af75 not found: ID does not exist" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.314648 4970 scope.go:117] "RemoveContainer" containerID="8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af" Sep 30 10:04:38 crc kubenswrapper[4970]: E0930 10:04:38.314951 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af\": container with ID starting with 8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af not found: ID does not exist" containerID="8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.314972 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af"} err="failed to get container status \"8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af\": rpc error: code = NotFound desc = could not find container \"8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af\": container with ID starting with 8d6ee41130c5c7a7238c5d4be6568eacdaee606f04fd52952f6ce1597c45c8af not found: ID does not exist" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.315003 4970 scope.go:117] "RemoveContainer" containerID="bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d" Sep 30 10:04:38 crc kubenswrapper[4970]: E0930 10:04:38.315130 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d\": container with ID starting with bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d not found: ID does not exist" containerID="bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.315144 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d"} err="failed to get container status \"bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d\": rpc error: code = NotFound desc = could not find container \"bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d\": container with ID starting with bc40567c29c31a2742217c2ea3af30ec234ee715faa452e8061fe2c643bb793d not found: ID does not exist" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.417611 4970 util.go:48] "No ready sandbox for pod can be found. 
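
Note: the E-level "ContainerStatus from runtime service failed ... NotFound" bursts above are the benign side of idempotent cleanup: "RemoveContainer" is re-issued for IDs whose records are already gone, CRI-O answers NotFound, and pod_container_deletor logs the error and moves on. A sketch that separates these NotFound echoes from removals that never produce one (Python; a heuristic over the log text, not a kubelet interface):

    import re

    remove = re.compile(r'"RemoveContainer" containerID="(?P<id>[0-9a-f]{64})"')
    notfound = re.compile(r'NotFound.*?containerID="(?P<id>[0-9a-f]{64})"')

    def classify_removals(lines):
        requested, echoed = set(), set()
        for ln in lines:
            if (m := remove.search(ln)):
                requested.add(m.group('id'))
            elif (m := notfound.search(ln)):
                echoed.add(m.group('id'))
        # echoes are retries against already-deleted containers (benign);
        # the remainder completed without a NotFound reply
        return {'benign_echoes': requested & echoed,
                'clean_removals': requested - echoed}
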
Need to start a new one" pod="openstack/nova-cell0-7fb2-account-create-t9tmn" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.443794 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.473720 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.507881 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:04:38 crc kubenswrapper[4970]: E0930 10:04:38.508477 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="ceilometer-central-agent" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.508501 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="ceilometer-central-agent" Sep 30 10:04:38 crc kubenswrapper[4970]: E0930 10:04:38.508522 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03969bb8-1309-49b6-910c-de58a7d7b3d4" containerName="neutron-api" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.508532 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="03969bb8-1309-49b6-910c-de58a7d7b3d4" containerName="neutron-api" Sep 30 10:04:38 crc kubenswrapper[4970]: E0930 10:04:38.508547 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="ceilometer-notification-agent" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.508556 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="ceilometer-notification-agent" Sep 30 10:04:38 crc kubenswrapper[4970]: E0930 10:04:38.508578 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="sg-core" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.508589 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="sg-core" Sep 30 10:04:38 crc kubenswrapper[4970]: E0930 10:04:38.508606 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="proxy-httpd" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.508614 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="proxy-httpd" Sep 30 10:04:38 crc kubenswrapper[4970]: E0930 10:04:38.508631 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59187562-fc0e-49ac-b22d-81abc6850bd7" containerName="mariadb-account-create" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.508639 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="59187562-fc0e-49ac-b22d-81abc6850bd7" containerName="mariadb-account-create" Sep 30 10:04:38 crc kubenswrapper[4970]: E0930 10:04:38.508679 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03969bb8-1309-49b6-910c-de58a7d7b3d4" containerName="neutron-httpd" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.508688 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="03969bb8-1309-49b6-910c-de58a7d7b3d4" containerName="neutron-httpd" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.508915 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="ceilometer-notification-agent" Sep 30 10:04:38 
crc kubenswrapper[4970]: I0930 10:04:38.508943 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="sg-core" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.508963 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="03969bb8-1309-49b6-910c-de58a7d7b3d4" containerName="neutron-httpd" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.508977 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="proxy-httpd" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.509017 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" containerName="ceilometer-central-agent" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.509033 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="59187562-fc0e-49ac-b22d-81abc6850bd7" containerName="mariadb-account-create" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.509051 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="03969bb8-1309-49b6-910c-de58a7d7b3d4" containerName="neutron-api" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.511237 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.518686 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.518791 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.518947 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.530438 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-788kg\" (UniqueName: \"kubernetes.io/projected/59187562-fc0e-49ac-b22d-81abc6850bd7-kube-api-access-788kg\") pod \"59187562-fc0e-49ac-b22d-81abc6850bd7\" (UID: \"59187562-fc0e-49ac-b22d-81abc6850bd7\") " Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.530901 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-scripts\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.530975 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6cxs\" (UniqueName: \"kubernetes.io/projected/f28e5971-ae4a-4985-8f44-f5746ef03dbc-kube-api-access-l6cxs\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.531022 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.531063 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f28e5971-ae4a-4985-8f44-f5746ef03dbc-log-httpd\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.531158 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f28e5971-ae4a-4985-8f44-f5746ef03dbc-run-httpd\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.532006 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.532064 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-config-data\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.536490 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59187562-fc0e-49ac-b22d-81abc6850bd7-kube-api-access-788kg" (OuterVolumeSpecName: "kube-api-access-788kg") pod "59187562-fc0e-49ac-b22d-81abc6850bd7" (UID: "59187562-fc0e-49ac-b22d-81abc6850bd7"). InnerVolumeSpecName "kube-api-access-788kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.633076 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f28e5971-ae4a-4985-8f44-f5746ef03dbc-run-httpd\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.633173 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.633231 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-config-data\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.633300 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-scripts\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.633343 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6cxs\" (UniqueName: \"kubernetes.io/projected/f28e5971-ae4a-4985-8f44-f5746ef03dbc-kube-api-access-l6cxs\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc 
kubenswrapper[4970]: I0930 10:04:38.633362 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.633383 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f28e5971-ae4a-4985-8f44-f5746ef03dbc-log-httpd\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.633461 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-788kg\" (UniqueName: \"kubernetes.io/projected/59187562-fc0e-49ac-b22d-81abc6850bd7-kube-api-access-788kg\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.633538 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f28e5971-ae4a-4985-8f44-f5746ef03dbc-run-httpd\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.633715 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f28e5971-ae4a-4985-8f44-f5746ef03dbc-log-httpd\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.637234 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.638373 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-scripts\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.638638 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-config-data\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.638700 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.650073 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6cxs\" (UniqueName: \"kubernetes.io/projected/f28e5971-ae4a-4985-8f44-f5746ef03dbc-kube-api-access-l6cxs\") pod \"ceilometer-0\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " pod="openstack/ceilometer-0" Sep 30 10:04:38 crc kubenswrapper[4970]: I0930 10:04:38.837165 4970 util.go:30] "No sandbox for pod can be found. 
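
Note: "ceilometer-0" above is deleted and immediately re-created under the same name with a new pod UID (6e9ce363-... torn down, f28e5971-... added), which is why cpu_manager and memory_manager purge stale per-UID state before admitting the replacement. A sketch that groups PLEG events by pod name to surface such reincarnations (Python; the regex is tailored to the event lines above and is an assumption, not a kubelet interface):

    import re
    from collections import defaultdict

    PLEG = re.compile(r'"SyncLoop \(PLEG\): event for pod" pod="(?P<pod>[^"]+)" '
                      r'event=\{"ID":"(?P<uid>[0-9a-f-]{36})"')

    def incarnations(lines):
        seen = defaultdict(list)  # pod name -> UIDs in order of first appearance
        for ln in lines:
            m = PLEG.search(ln)
            if m and m.group('uid') not in seen[m.group('pod')]:
                seen[m.group('pod')].append(m.group('uid'))
        # more than one UID per name means delete-and-recreate, e.g. ceilometer-0
        return dict(seen)
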
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:04:39 crc kubenswrapper[4970]: I0930 10:04:39.092777 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:04:39 crc kubenswrapper[4970]: W0930 10:04:39.100909 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf28e5971_ae4a_4985_8f44_f5746ef03dbc.slice/crio-8cb36e0111d79c02ea47721c6336bcc9e90303d0181a07257c513ba7dd06f8ac WatchSource:0}: Error finding container 8cb36e0111d79c02ea47721c6336bcc9e90303d0181a07257c513ba7dd06f8ac: Status 404 returned error can't find the container with id 8cb36e0111d79c02ea47721c6336bcc9e90303d0181a07257c513ba7dd06f8ac Sep 30 10:04:39 crc kubenswrapper[4970]: I0930 10:04:39.141791 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-7fb2-account-create-t9tmn" event={"ID":"59187562-fc0e-49ac-b22d-81abc6850bd7","Type":"ContainerDied","Data":"aead8ab6ccd62022ec11aadd91e854b385ddf7c16e6020e087c834d7943a6d0a"} Sep 30 10:04:39 crc kubenswrapper[4970]: I0930 10:04:39.141842 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aead8ab6ccd62022ec11aadd91e854b385ddf7c16e6020e087c834d7943a6d0a" Sep 30 10:04:39 crc kubenswrapper[4970]: I0930 10:04:39.141931 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-7fb2-account-create-t9tmn" Sep 30 10:04:39 crc kubenswrapper[4970]: I0930 10:04:39.148068 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f28e5971-ae4a-4985-8f44-f5746ef03dbc","Type":"ContainerStarted","Data":"8cb36e0111d79c02ea47721c6336bcc9e90303d0181a07257c513ba7dd06f8ac"} Sep 30 10:04:39 crc kubenswrapper[4970]: I0930 10:04:39.428000 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:04:39 crc kubenswrapper[4970]: I0930 10:04:39.516813 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-91c3-account-create-f6wvn" Sep 30 10:04:39 crc kubenswrapper[4970]: I0930 10:04:39.661337 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dglfz\" (UniqueName: \"kubernetes.io/projected/f6c12a73-2619-429d-9ef3-2a7a25cc0906-kube-api-access-dglfz\") pod \"f6c12a73-2619-429d-9ef3-2a7a25cc0906\" (UID: \"f6c12a73-2619-429d-9ef3-2a7a25cc0906\") " Sep 30 10:04:39 crc kubenswrapper[4970]: I0930 10:04:39.666364 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c12a73-2619-429d-9ef3-2a7a25cc0906-kube-api-access-dglfz" (OuterVolumeSpecName: "kube-api-access-dglfz") pod "f6c12a73-2619-429d-9ef3-2a7a25cc0906" (UID: "f6c12a73-2619-429d-9ef3-2a7a25cc0906"). InnerVolumeSpecName "kube-api-access-dglfz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:04:39 crc kubenswrapper[4970]: I0930 10:04:39.684644 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e9ce363-bf0f-41d0-927d-cc923c3e4bc0" path="/var/lib/kubelet/pods/6e9ce363-bf0f-41d0-927d-cc923c3e4bc0/volumes" Sep 30 10:04:39 crc kubenswrapper[4970]: I0930 10:04:39.763849 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dglfz\" (UniqueName: \"kubernetes.io/projected/f6c12a73-2619-429d-9ef3-2a7a25cc0906-kube-api-access-dglfz\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.164657 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f28e5971-ae4a-4985-8f44-f5746ef03dbc","Type":"ContainerStarted","Data":"14fbe4c486f90c0360dced539808d634dba0516324bd955021dea94075569e4c"} Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.166345 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-91c3-account-create-f6wvn" event={"ID":"f6c12a73-2619-429d-9ef3-2a7a25cc0906","Type":"ContainerDied","Data":"1d9aa9c907f9d598c571f7a739f6e712fe1e2774ce8820168d739fcdd7615c83"} Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.166393 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d9aa9c907f9d598c571f7a739f6e712fe1e2774ce8820168d739fcdd7615c83" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.166401 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-91c3-account-create-f6wvn" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.284614 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.284679 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.324012 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.330835 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.961883 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n2fjv"] Sep 30 10:04:40 crc kubenswrapper[4970]: E0930 10:04:40.962462 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6c12a73-2619-429d-9ef3-2a7a25cc0906" containerName="mariadb-account-create" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.962481 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6c12a73-2619-429d-9ef3-2a7a25cc0906" containerName="mariadb-account-create" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.962684 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6c12a73-2619-429d-9ef3-2a7a25cc0906" containerName="mariadb-account-create" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.963354 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.966620 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.967398 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.968946 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-b589f" Sep 30 10:04:40 crc kubenswrapper[4970]: I0930 10:04:40.972019 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n2fjv"] Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.087792 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-config-data\") pod \"nova-cell0-conductor-db-sync-n2fjv\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.088008 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qxdd\" (UniqueName: \"kubernetes.io/projected/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-kube-api-access-4qxdd\") pod \"nova-cell0-conductor-db-sync-n2fjv\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.088054 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-scripts\") pod \"nova-cell0-conductor-db-sync-n2fjv\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.088171 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-n2fjv\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.177413 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f28e5971-ae4a-4985-8f44-f5746ef03dbc","Type":"ContainerStarted","Data":"fb6133d31f10d7990962b742e879440d606da30daab2d039257b2644a6eac5d4"} Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.178123 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.178252 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.189314 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-scripts\") pod \"nova-cell0-conductor-db-sync-n2fjv\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.189375 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-n2fjv\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.189469 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-config-data\") pod \"nova-cell0-conductor-db-sync-n2fjv\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.189538 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qxdd\" (UniqueName: \"kubernetes.io/projected/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-kube-api-access-4qxdd\") pod \"nova-cell0-conductor-db-sync-n2fjv\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.194463 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-n2fjv\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.194609 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-scripts\") pod \"nova-cell0-conductor-db-sync-n2fjv\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.199560 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-config-data\") pod \"nova-cell0-conductor-db-sync-n2fjv\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.208898 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qxdd\" (UniqueName: \"kubernetes.io/projected/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-kube-api-access-4qxdd\") pod \"nova-cell0-conductor-db-sync-n2fjv\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.279174 4970 util.go:30] "No sandbox for pod can be found. 
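
Note: each volume above goes through the same three-step flow: "VerifyControllerAttachedVolume started" from the reconciler, then "MountVolume started", then "MountVolume.SetUp succeeded" from operation_generator. Pairing starts with successes is a quick way to spot mounts that hang. A sketch under the assumption that every mount attempt logs both phases (Python):

    import re

    started = re.compile(r'MountVolume started for volume \\?"(?P<vol>[^\\"]+)\\?"')
    succeeded = re.compile(r'MountVolume\.SetUp succeeded for volume \\?"(?P<vol>[^\\"]+)\\?"')

    def stuck_mounts(lines):
        pending = {}
        for i, ln in enumerate(lines):
            if (m := started.search(ln)):
                pending[m.group('vol')] = i        # remember where it started
            elif (m := succeeded.search(ln)):
                pending.pop(m.group('vol'), None)  # completed; clear it
        return pending  # volume -> line index of a start that never succeeded
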
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n2fjv"
Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.282389 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.282521 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.323362 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.441710 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Sep 30 10:04:41 crc kubenswrapper[4970]: W0930 10:04:41.787256 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83571c5f_f5d7_47e3_8e2c_a0be1f7f0f1a.slice/crio-7aa55c91b4ccf1f16a4e6cc70524c1486d09452d01f1e31d128b5308aa9d0005 WatchSource:0}: Error finding container 7aa55c91b4ccf1f16a4e6cc70524c1486d09452d01f1e31d128b5308aa9d0005: Status 404 returned error can't find the container with id 7aa55c91b4ccf1f16a4e6cc70524c1486d09452d01f1e31d128b5308aa9d0005
Sep 30 10:04:41 crc kubenswrapper[4970]: I0930 10:04:41.789690 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n2fjv"]
Sep 30 10:04:42 crc kubenswrapper[4970]: I0930 10:04:42.186588 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n2fjv" event={"ID":"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a","Type":"ContainerStarted","Data":"7aa55c91b4ccf1f16a4e6cc70524c1486d09452d01f1e31d128b5308aa9d0005"}
Sep 30 10:04:42 crc kubenswrapper[4970]: I0930 10:04:42.188898 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f28e5971-ae4a-4985-8f44-f5746ef03dbc","Type":"ContainerStarted","Data":"85fc4febe00ad94cadb028bc23ae487f57a71e26656d7b229c0ce7c0fa603c29"}
Sep 30 10:04:42 crc kubenswrapper[4970]: I0930 10:04:42.189371 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Sep 30 10:04:42 crc kubenswrapper[4970]: I0930 10:04:42.189405 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Sep 30 10:04:43 crc kubenswrapper[4970]: I0930 10:04:43.223932 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 30 10:04:43 crc kubenswrapper[4970]: I0930 10:04:43.224242 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 30 10:04:43 crc kubenswrapper[4970]: I0930 10:04:43.225085 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="ceilometer-central-agent" containerID="cri-o://14fbe4c486f90c0360dced539808d634dba0516324bd955021dea94075569e4c" gracePeriod=30
Sep 30 10:04:43 crc kubenswrapper[4970]: I0930 10:04:43.225302 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f28e5971-ae4a-4985-8f44-f5746ef03dbc","Type":"ContainerStarted","Data":"3eeb6fe1108cf7da28b815de4efb828006f920939366d8125bcca7a4efa6309c"}
Sep 30 10:04:43 crc kubenswrapper[4970]: I0930 10:04:43.225357 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="sg-core" containerID="cri-o://85fc4febe00ad94cadb028bc23ae487f57a71e26656d7b229c0ce7c0fa603c29" gracePeriod=30
Sep 30 10:04:43 crc kubenswrapper[4970]: I0930 10:04:43.225352 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="proxy-httpd" containerID="cri-o://3eeb6fe1108cf7da28b815de4efb828006f920939366d8125bcca7a4efa6309c" gracePeriod=30
Sep 30 10:04:43 crc kubenswrapper[4970]: I0930 10:04:43.225604 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="ceilometer-notification-agent" containerID="cri-o://fb6133d31f10d7990962b742e879440d606da30daab2d039257b2644a6eac5d4" gracePeriod=30
Sep 30 10:04:43 crc kubenswrapper[4970]: I0930 10:04:43.225746 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Sep 30 10:04:43 crc kubenswrapper[4970]: I0930 10:04:43.262600 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.732332668 podStartE2EDuration="5.262582048s" podCreationTimestamp="2025-09-30 10:04:38 +0000 UTC" firstStartedPulling="2025-09-30 10:04:39.106067392 +0000 UTC m=+1092.177918326" lastFinishedPulling="2025-09-30 10:04:42.636316772 +0000 UTC m=+1095.708167706" observedRunningTime="2025-09-30 10:04:43.254197365 +0000 UTC m=+1096.326048299" watchObservedRunningTime="2025-09-30 10:04:43.262582048 +0000 UTC m=+1096.334432982"
Sep 30 10:04:43 crc kubenswrapper[4970]: I0930 10:04:43.776502 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Sep 30 10:04:43 crc kubenswrapper[4970]: I0930 10:04:43.913745 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Sep 30 10:04:44 crc kubenswrapper[4970]: I0930 10:04:44.242355 4970 generic.go:334] "Generic (PLEG): container finished" podID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerID="3eeb6fe1108cf7da28b815de4efb828006f920939366d8125bcca7a4efa6309c" exitCode=0
Sep 30 10:04:44 crc kubenswrapper[4970]: I0930 10:04:44.242399 4970 generic.go:334] "Generic (PLEG): container finished" podID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerID="85fc4febe00ad94cadb028bc23ae487f57a71e26656d7b229c0ce7c0fa603c29" exitCode=2
Sep 30 10:04:44 crc kubenswrapper[4970]: I0930 10:04:44.242408 4970 generic.go:334] "Generic (PLEG): container finished" podID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerID="fb6133d31f10d7990962b742e879440d606da30daab2d039257b2644a6eac5d4" exitCode=0
Sep 30 10:04:44 crc kubenswrapper[4970]: I0930 10:04:44.242440 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f28e5971-ae4a-4985-8f44-f5746ef03dbc","Type":"ContainerDied","Data":"3eeb6fe1108cf7da28b815de4efb828006f920939366d8125bcca7a4efa6309c"}
Sep 30 10:04:44 crc kubenswrapper[4970]: I0930 10:04:44.242504 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f28e5971-ae4a-4985-8f44-f5746ef03dbc","Type":"ContainerDied","Data":"85fc4febe00ad94cadb028bc23ae487f57a71e26656d7b229c0ce7c0fa603c29"}
Sep 30 10:04:44 crc kubenswrapper[4970]: I0930 10:04:44.242517 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f28e5971-ae4a-4985-8f44-f5746ef03dbc","Type":"ContainerDied","Data":"fb6133d31f10d7990962b742e879440d606da30daab2d039257b2644a6eac5d4"}
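NOTE: The "Killing container with a grace period ... gracePeriod=30" entries above are the kubelet's normal stop sequence: the runtime delivers SIGTERM and escalates to SIGKILL only if the container is still running when the grace period expires. The exit codes in the adjacent "container finished" entries are consistent with that (proxy-httpd and the agents exit 0 on TERM; sg-core exits 2). A minimal Go sketch of the same pattern follows; it is an illustration, not kubelet's actual implementation, and the "sleep" child process stands in for a container.

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // stopGracefully mirrors the pattern logged above: SIGTERM first, then
    // SIGKILL only if the process outlives the grace period.
    func stopGracefully(cmd *exec.Cmd, grace time.Duration) error {
        if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
            return err
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()
        select {
        case err := <-done:
            return err // exited within the grace period
        case <-time.After(grace):
            _ = cmd.Process.Kill() // grace period exhausted, hard kill
            return <-done
        }
    }

    func main() {
        cmd := exec.Command("sleep", "300") // stand-in for a container process
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        fmt.Println("stop result:", stopGracefully(cmd, 30*time.Second))
    }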
pod="openstack/ceilometer-0" event={"ID":"f28e5971-ae4a-4985-8f44-f5746ef03dbc","Type":"ContainerDied","Data":"fb6133d31f10d7990962b742e879440d606da30daab2d039257b2644a6eac5d4"} Sep 30 10:04:44 crc kubenswrapper[4970]: I0930 10:04:44.787661 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 30 10:04:44 crc kubenswrapper[4970]: I0930 10:04:44.788044 4970 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 30 10:04:44 crc kubenswrapper[4970]: I0930 10:04:44.791778 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Sep 30 10:04:50 crc kubenswrapper[4970]: I0930 10:04:50.317020 4970 generic.go:334] "Generic (PLEG): container finished" podID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerID="14fbe4c486f90c0360dced539808d634dba0516324bd955021dea94075569e4c" exitCode=0 Sep 30 10:04:50 crc kubenswrapper[4970]: I0930 10:04:50.317089 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f28e5971-ae4a-4985-8f44-f5746ef03dbc","Type":"ContainerDied","Data":"14fbe4c486f90c0360dced539808d634dba0516324bd955021dea94075569e4c"} Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.103087 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.224436 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-scripts\") pod \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.224536 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f28e5971-ae4a-4985-8f44-f5746ef03dbc-log-httpd\") pod \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.224608 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-combined-ca-bundle\") pod \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.224628 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6cxs\" (UniqueName: \"kubernetes.io/projected/f28e5971-ae4a-4985-8f44-f5746ef03dbc-kube-api-access-l6cxs\") pod \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.224676 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-sg-core-conf-yaml\") pod \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.224692 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f28e5971-ae4a-4985-8f44-f5746ef03dbc-run-httpd\") pod \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " Sep 30 10:04:51 crc 
kubenswrapper[4970]: I0930 10:04:51.224733 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-config-data\") pod \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\" (UID: \"f28e5971-ae4a-4985-8f44-f5746ef03dbc\") " Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.226263 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f28e5971-ae4a-4985-8f44-f5746ef03dbc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f28e5971-ae4a-4985-8f44-f5746ef03dbc" (UID: "f28e5971-ae4a-4985-8f44-f5746ef03dbc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.226379 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f28e5971-ae4a-4985-8f44-f5746ef03dbc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f28e5971-ae4a-4985-8f44-f5746ef03dbc" (UID: "f28e5971-ae4a-4985-8f44-f5746ef03dbc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.232257 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-scripts" (OuterVolumeSpecName: "scripts") pod "f28e5971-ae4a-4985-8f44-f5746ef03dbc" (UID: "f28e5971-ae4a-4985-8f44-f5746ef03dbc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.232337 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f28e5971-ae4a-4985-8f44-f5746ef03dbc-kube-api-access-l6cxs" (OuterVolumeSpecName: "kube-api-access-l6cxs") pod "f28e5971-ae4a-4985-8f44-f5746ef03dbc" (UID: "f28e5971-ae4a-4985-8f44-f5746ef03dbc"). InnerVolumeSpecName "kube-api-access-l6cxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.254192 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f28e5971-ae4a-4985-8f44-f5746ef03dbc" (UID: "f28e5971-ae4a-4985-8f44-f5746ef03dbc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.315169 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f28e5971-ae4a-4985-8f44-f5746ef03dbc" (UID: "f28e5971-ae4a-4985-8f44-f5746ef03dbc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.327390 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.327423 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f28e5971-ae4a-4985-8f44-f5746ef03dbc-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.327436 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.327450 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6cxs\" (UniqueName: \"kubernetes.io/projected/f28e5971-ae4a-4985-8f44-f5746ef03dbc-kube-api-access-l6cxs\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.327462 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.327473 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f28e5971-ae4a-4985-8f44-f5746ef03dbc-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.327832 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-config-data" (OuterVolumeSpecName: "config-data") pod "f28e5971-ae4a-4985-8f44-f5746ef03dbc" (UID: "f28e5971-ae4a-4985-8f44-f5746ef03dbc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.333079 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n2fjv" event={"ID":"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a","Type":"ContainerStarted","Data":"1367a89aff58b12a8df8031425b319f2849c4ce4cb6336435e25b4c4c2467f7a"} Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.337449 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f28e5971-ae4a-4985-8f44-f5746ef03dbc","Type":"ContainerDied","Data":"8cb36e0111d79c02ea47721c6336bcc9e90303d0181a07257c513ba7dd06f8ac"} Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.337500 4970 scope.go:117] "RemoveContainer" containerID="3eeb6fe1108cf7da28b815de4efb828006f920939366d8125bcca7a4efa6309c" Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.337629 4970 util.go:48] "No ready sandbox for pod can be found. 
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.350685 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-n2fjv" podStartSLOduration=2.309960927 podStartE2EDuration="11.350660393s" podCreationTimestamp="2025-09-30 10:04:40 +0000 UTC" firstStartedPulling="2025-09-30 10:04:41.791508922 +0000 UTC m=+1094.863359856" lastFinishedPulling="2025-09-30 10:04:50.832208378 +0000 UTC m=+1103.904059322" observedRunningTime="2025-09-30 10:04:51.348021851 +0000 UTC m=+1104.419872785" watchObservedRunningTime="2025-09-30 10:04:51.350660393 +0000 UTC m=+1104.422511337"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.367899 4970 scope.go:117] "RemoveContainer" containerID="85fc4febe00ad94cadb028bc23ae487f57a71e26656d7b229c0ce7c0fa603c29"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.374612 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.391007 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.407036 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:04:51 crc kubenswrapper[4970]: E0930 10:04:51.407448 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="sg-core"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.407467 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="sg-core"
Sep 30 10:04:51 crc kubenswrapper[4970]: E0930 10:04:51.407491 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="ceilometer-central-agent"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.407498 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="ceilometer-central-agent"
Sep 30 10:04:51 crc kubenswrapper[4970]: E0930 10:04:51.407523 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="ceilometer-notification-agent"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.407528 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="ceilometer-notification-agent"
Sep 30 10:04:51 crc kubenswrapper[4970]: E0930 10:04:51.407540 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="proxy-httpd"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.407546 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="proxy-httpd"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.407717 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="proxy-httpd"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.407737 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="ceilometer-central-agent"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.407750 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="sg-core"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.407766 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" containerName="ceilometer-notification-agent"
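NOTE: The cpu_manager/state_mem/memory_manager burst above runs while the replacement ceilometer-0 is admitted: both resource managers drop checkpointed per-container state that still references the old pod UID, since those containers can never come back. The E-level "RemoveStaleState: removing container" lines are simply how cpu_manager reports each removal; nothing actually failed. A minimal sketch of the idea, assuming plain maps in place of the managers' on-disk checkpoints:

    package main

    import "fmt"

    // stateMem stands in for the per-(podUID, containerName) assignments the
    // CPU and memory managers checkpoint between kubelet restarts.
    type stateMem map[string]map[string]string

    // removeStaleState drops entries for pods that are no longer active,
    // mirroring the "Deleted CPUSet assignment" entries above.
    func (s stateMem) removeStaleState(activePods map[string]bool) {
        for podUID, containers := range s {
            if activePods[podUID] {
                continue
            }
            for name := range containers {
                fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
                delete(containers, name)
            }
            delete(s, podUID)
        }
    }

    func main() {
        s := stateMem{"f28e5971": {"sg-core": "cpus 0-1", "proxy-httpd": "cpus 2"}}
        s.removeStaleState(map[string]bool{"1085522c": true}) // old UID stale, new UID active
    }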
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.409406 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.416824 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.452588 4970 scope.go:117] "RemoveContainer" containerID="fb6133d31f10d7990962b742e879440d606da30daab2d039257b2644a6eac5d4"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.452598 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.452697 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.454370 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f28e5971-ae4a-4985-8f44-f5746ef03dbc-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.486412 4970 scope.go:117] "RemoveContainer" containerID="14fbe4c486f90c0360dced539808d634dba0516324bd955021dea94075569e4c"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.556292 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-scripts\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.556371 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1085522c-1af4-49ba-82fe-33a0fb1377ba-run-httpd\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.556473 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-config-data\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.556657 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.556731 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6lpk\" (UniqueName: \"kubernetes.io/projected/1085522c-1af4-49ba-82fe-33a0fb1377ba-kube-api-access-w6lpk\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.556753 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.556858 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1085522c-1af4-49ba-82fe-33a0fb1377ba-log-httpd\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.658166 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6lpk\" (UniqueName: \"kubernetes.io/projected/1085522c-1af4-49ba-82fe-33a0fb1377ba-kube-api-access-w6lpk\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.658213 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.658281 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1085522c-1af4-49ba-82fe-33a0fb1377ba-log-httpd\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.658333 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-scripts\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.658370 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1085522c-1af4-49ba-82fe-33a0fb1377ba-run-httpd\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.658392 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-config-data\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.658461 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.659660 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1085522c-1af4-49ba-82fe-33a0fb1377ba-log-httpd\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.659797 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1085522c-1af4-49ba-82fe-33a0fb1377ba-run-httpd\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.662624 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.662660 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.663455 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-config-data\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.673524 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-scripts\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.677151 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6lpk\" (UniqueName: \"kubernetes.io/projected/1085522c-1af4-49ba-82fe-33a0fb1377ba-kube-api-access-w6lpk\") pod \"ceilometer-0\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " pod="openstack/ceilometer-0"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.691219 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f28e5971-ae4a-4985-8f44-f5746ef03dbc" path="/var/lib/kubelet/pods/f28e5971-ae4a-4985-8f44-f5746ef03dbc/volumes"
Sep 30 10:04:51 crc kubenswrapper[4970]: I0930 10:04:51.784693 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 10:04:52 crc kubenswrapper[4970]: I0930 10:04:52.240569 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:04:52 crc kubenswrapper[4970]: I0930 10:04:52.346840 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1085522c-1af4-49ba-82fe-33a0fb1377ba","Type":"ContainerStarted","Data":"43465a9c89e5a4dd3e4cb04a80d04d696fb372fee1c030f8a3a53e139a2197c5"}
Sep 30 10:04:53 crc kubenswrapper[4970]: I0930 10:04:53.082193 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:04:53 crc kubenswrapper[4970]: I0930 10:04:53.359792 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1085522c-1af4-49ba-82fe-33a0fb1377ba","Type":"ContainerStarted","Data":"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1"}
Sep 30 10:04:54 crc kubenswrapper[4970]: I0930 10:04:54.372647 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1085522c-1af4-49ba-82fe-33a0fb1377ba","Type":"ContainerStarted","Data":"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c"}
Sep 30 10:04:56 crc kubenswrapper[4970]: I0930 10:04:56.409707 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1085522c-1af4-49ba-82fe-33a0fb1377ba","Type":"ContainerStarted","Data":"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea"}
Sep 30 10:04:58 crc kubenswrapper[4970]: I0930 10:04:58.434382 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1085522c-1af4-49ba-82fe-33a0fb1377ba","Type":"ContainerStarted","Data":"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995"}
Sep 30 10:04:58 crc kubenswrapper[4970]: I0930 10:04:58.434951 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="ceilometer-central-agent" containerID="cri-o://d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1" gracePeriod=30
Sep 30 10:04:58 crc kubenswrapper[4970]: I0930 10:04:58.435390 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Sep 30 10:04:58 crc kubenswrapper[4970]: I0930 10:04:58.435831 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="proxy-httpd" containerID="cri-o://8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995" gracePeriod=30
Sep 30 10:04:58 crc kubenswrapper[4970]: I0930 10:04:58.435921 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="sg-core" containerID="cri-o://f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea" gracePeriod=30
Sep 30 10:04:58 crc kubenswrapper[4970]: I0930 10:04:58.436018 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="ceilometer-notification-agent" containerID="cri-o://3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c" gracePeriod=30
Sep 30 10:04:58 crc kubenswrapper[4970]: I0930 10:04:58.473686 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.641482741 podStartE2EDuration="7.473659245s" podCreationTimestamp="2025-09-30 10:04:51 +0000 UTC" firstStartedPulling="2025-09-30 10:04:52.250176125 +0000 UTC m=+1105.322027059" lastFinishedPulling="2025-09-30 10:04:57.082352629 +0000 UTC m=+1110.154203563" observedRunningTime="2025-09-30 10:04:58.464730221 +0000 UTC m=+1111.536581155" watchObservedRunningTime="2025-09-30 10:04:58.473659245 +0000 UTC m=+1111.545510179"
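NOTE: The startup-latency entry above can be checked by hand: podStartSLOduration is the end-to-end startup duration minus the image-pull window, 7.473659245s - (10:04:57.082352629 - 10:04:52.250176125) = 2.641482741s. The same arithmetic reproduces the earlier nova-cell0-conductor-db-sync figures if the monotonic m=+ offsets are used for the pull window. A short Go check using the timestamps exactly as logged:

    package main

    import (
        "fmt"
        "time"
    )

    // Reproduces podStartSLOduration from the entry above: the end-to-end
    // startup duration minus the window spent pulling images.
    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        mustParse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        firstPull := mustParse("2025-09-30 10:04:52.250176125 +0000 UTC")
        lastPull := mustParse("2025-09-30 10:04:57.082352629 +0000 UTC")
        e2e, err := time.ParseDuration("7.473659245s") // podStartE2EDuration
        if err != nil {
            panic(err)
        }
        slo := e2e - lastPull.Sub(firstPull)
        fmt.Println(slo) // 2.641482741s, matching podStartSLOduration
    }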
pod="openstack/ceilometer-0" podStartSLOduration=2.641482741 podStartE2EDuration="7.473659245s" podCreationTimestamp="2025-09-30 10:04:51 +0000 UTC" firstStartedPulling="2025-09-30 10:04:52.250176125 +0000 UTC m=+1105.322027059" lastFinishedPulling="2025-09-30 10:04:57.082352629 +0000 UTC m=+1110.154203563" observedRunningTime="2025-09-30 10:04:58.464730221 +0000 UTC m=+1111.536581155" watchObservedRunningTime="2025-09-30 10:04:58.473659245 +0000 UTC m=+1111.545510179" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.099666 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.199286 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-scripts\") pod \"1085522c-1af4-49ba-82fe-33a0fb1377ba\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.199381 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-config-data\") pod \"1085522c-1af4-49ba-82fe-33a0fb1377ba\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.199489 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1085522c-1af4-49ba-82fe-33a0fb1377ba-run-httpd\") pod \"1085522c-1af4-49ba-82fe-33a0fb1377ba\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.199529 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-sg-core-conf-yaml\") pod \"1085522c-1af4-49ba-82fe-33a0fb1377ba\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.199564 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1085522c-1af4-49ba-82fe-33a0fb1377ba-log-httpd\") pod \"1085522c-1af4-49ba-82fe-33a0fb1377ba\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.199704 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-combined-ca-bundle\") pod \"1085522c-1af4-49ba-82fe-33a0fb1377ba\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.199757 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6lpk\" (UniqueName: \"kubernetes.io/projected/1085522c-1af4-49ba-82fe-33a0fb1377ba-kube-api-access-w6lpk\") pod \"1085522c-1af4-49ba-82fe-33a0fb1377ba\" (UID: \"1085522c-1af4-49ba-82fe-33a0fb1377ba\") " Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.201825 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1085522c-1af4-49ba-82fe-33a0fb1377ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1085522c-1af4-49ba-82fe-33a0fb1377ba" (UID: "1085522c-1af4-49ba-82fe-33a0fb1377ba"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.201999 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1085522c-1af4-49ba-82fe-33a0fb1377ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1085522c-1af4-49ba-82fe-33a0fb1377ba" (UID: "1085522c-1af4-49ba-82fe-33a0fb1377ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.207379 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1085522c-1af4-49ba-82fe-33a0fb1377ba-kube-api-access-w6lpk" (OuterVolumeSpecName: "kube-api-access-w6lpk") pod "1085522c-1af4-49ba-82fe-33a0fb1377ba" (UID: "1085522c-1af4-49ba-82fe-33a0fb1377ba"). InnerVolumeSpecName "kube-api-access-w6lpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.207521 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-scripts" (OuterVolumeSpecName: "scripts") pod "1085522c-1af4-49ba-82fe-33a0fb1377ba" (UID: "1085522c-1af4-49ba-82fe-33a0fb1377ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.230377 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1085522c-1af4-49ba-82fe-33a0fb1377ba" (UID: "1085522c-1af4-49ba-82fe-33a0fb1377ba"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.276440 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1085522c-1af4-49ba-82fe-33a0fb1377ba" (UID: "1085522c-1af4-49ba-82fe-33a0fb1377ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.299846 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-config-data" (OuterVolumeSpecName: "config-data") pod "1085522c-1af4-49ba-82fe-33a0fb1377ba" (UID: "1085522c-1af4-49ba-82fe-33a0fb1377ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.302118 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1085522c-1af4-49ba-82fe-33a0fb1377ba-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.302144 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.302158 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6lpk\" (UniqueName: \"kubernetes.io/projected/1085522c-1af4-49ba-82fe-33a0fb1377ba-kube-api-access-w6lpk\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.302168 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.302175 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.302184 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1085522c-1af4-49ba-82fe-33a0fb1377ba-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.302193 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1085522c-1af4-49ba-82fe-33a0fb1377ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.445758 4970 generic.go:334] "Generic (PLEG): container finished" podID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerID="8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995" exitCode=0 Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.445796 4970 generic.go:334] "Generic (PLEG): container finished" podID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerID="f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea" exitCode=2 Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.445807 4970 generic.go:334] "Generic (PLEG): container finished" podID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerID="3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c" exitCode=0 Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.445814 4970 generic.go:334] "Generic (PLEG): container finished" podID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerID="d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1" exitCode=0 Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.445833 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1085522c-1af4-49ba-82fe-33a0fb1377ba","Type":"ContainerDied","Data":"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995"} Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.445858 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1085522c-1af4-49ba-82fe-33a0fb1377ba","Type":"ContainerDied","Data":"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea"} Sep 30 10:04:59 crc 
kubenswrapper[4970]: I0930 10:04:59.445869 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1085522c-1af4-49ba-82fe-33a0fb1377ba","Type":"ContainerDied","Data":"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c"} Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.445878 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1085522c-1af4-49ba-82fe-33a0fb1377ba","Type":"ContainerDied","Data":"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1"} Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.445887 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1085522c-1af4-49ba-82fe-33a0fb1377ba","Type":"ContainerDied","Data":"43465a9c89e5a4dd3e4cb04a80d04d696fb372fee1c030f8a3a53e139a2197c5"} Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.445903 4970 scope.go:117] "RemoveContainer" containerID="8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.446052 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.475850 4970 scope.go:117] "RemoveContainer" containerID="f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.482318 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.490911 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.497788 4970 scope.go:117] "RemoveContainer" containerID="3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.520565 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:04:59 crc kubenswrapper[4970]: E0930 10:04:59.521016 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="sg-core" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.521040 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="sg-core" Sep 30 10:04:59 crc kubenswrapper[4970]: E0930 10:04:59.521055 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="ceilometer-notification-agent" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.521063 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="ceilometer-notification-agent" Sep 30 10:04:59 crc kubenswrapper[4970]: E0930 10:04:59.521084 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="proxy-httpd" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.521094 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="proxy-httpd" Sep 30 10:04:59 crc kubenswrapper[4970]: E0930 10:04:59.521141 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="ceilometer-central-agent" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.521150 4970 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="ceilometer-central-agent" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.521398 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="sg-core" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.521425 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="proxy-httpd" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.521452 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="ceilometer-notification-agent" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.521464 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" containerName="ceilometer-central-agent" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.523378 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.525691 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.530504 4970 scope.go:117] "RemoveContainer" containerID="d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.530586 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.541258 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.592491 4970 scope.go:117] "RemoveContainer" containerID="8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995" Sep 30 10:04:59 crc kubenswrapper[4970]: E0930 10:04:59.595209 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995\": container with ID starting with 8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995 not found: ID does not exist" containerID="8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.595275 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995"} err="failed to get container status \"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995\": rpc error: code = NotFound desc = could not find container \"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995\": container with ID starting with 8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995 not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.595307 4970 scope.go:117] "RemoveContainer" containerID="f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea" Sep 30 10:04:59 crc kubenswrapper[4970]: E0930 10:04:59.595761 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea\": container with ID starting with f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea not found: ID does not exist" 
containerID="f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.595802 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea"} err="failed to get container status \"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea\": rpc error: code = NotFound desc = could not find container \"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea\": container with ID starting with f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.595829 4970 scope.go:117] "RemoveContainer" containerID="3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c" Sep 30 10:04:59 crc kubenswrapper[4970]: E0930 10:04:59.596326 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c\": container with ID starting with 3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c not found: ID does not exist" containerID="3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.596359 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c"} err="failed to get container status \"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c\": rpc error: code = NotFound desc = could not find container \"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c\": container with ID starting with 3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.596376 4970 scope.go:117] "RemoveContainer" containerID="d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1" Sep 30 10:04:59 crc kubenswrapper[4970]: E0930 10:04:59.596652 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1\": container with ID starting with d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1 not found: ID does not exist" containerID="d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.596682 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1"} err="failed to get container status \"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1\": rpc error: code = NotFound desc = could not find container \"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1\": container with ID starting with d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1 not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.596704 4970 scope.go:117] "RemoveContainer" containerID="8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.597094 4970 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995"} err="failed to get container status \"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995\": rpc error: code = NotFound desc = could not find container \"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995\": container with ID starting with 8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995 not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.597116 4970 scope.go:117] "RemoveContainer" containerID="f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.597470 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea"} err="failed to get container status \"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea\": rpc error: code = NotFound desc = could not find container \"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea\": container with ID starting with f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.597491 4970 scope.go:117] "RemoveContainer" containerID="3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.597807 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c"} err="failed to get container status \"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c\": rpc error: code = NotFound desc = could not find container \"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c\": container with ID starting with 3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.597825 4970 scope.go:117] "RemoveContainer" containerID="d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.598113 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1"} err="failed to get container status \"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1\": rpc error: code = NotFound desc = could not find container \"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1\": container with ID starting with d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1 not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.598143 4970 scope.go:117] "RemoveContainer" containerID="8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.598531 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995"} err="failed to get container status \"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995\": rpc error: code = NotFound desc = could not find container \"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995\": container with ID starting with 8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995 not found: ID does not exist" Sep 
30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.598554 4970 scope.go:117] "RemoveContainer" containerID="f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.598882 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea"} err="failed to get container status \"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea\": rpc error: code = NotFound desc = could not find container \"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea\": container with ID starting with f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.598905 4970 scope.go:117] "RemoveContainer" containerID="3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.599276 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c"} err="failed to get container status \"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c\": rpc error: code = NotFound desc = could not find container \"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c\": container with ID starting with 3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.599293 4970 scope.go:117] "RemoveContainer" containerID="d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.599532 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1"} err="failed to get container status \"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1\": rpc error: code = NotFound desc = could not find container \"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1\": container with ID starting with d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1 not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.599556 4970 scope.go:117] "RemoveContainer" containerID="8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.599880 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995"} err="failed to get container status \"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995\": rpc error: code = NotFound desc = could not find container \"8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995\": container with ID starting with 8440e32eb9d67c55e7d4d786a4d9dd8d775ad7ac796563003ca8ff623656c995 not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.599899 4970 scope.go:117] "RemoveContainer" containerID="f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.600224 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea"} err="failed to get container status 
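NOTE: The long RemoveContainer / "DeleteContainer returned error" exchange above is noisy but harmless: several cleanup paths race to delete the same four containers of the short-lived pod 1085522c-..., and every attempt after the first gets NotFound back from CRI-O ("ID does not exist"). Deletion is treated as idempotent: NotFound means the container is already gone, which is the desired end state. A sketch of that pattern, with a stand-in error value instead of the real gRPC status:

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound stands in for the gRPC NotFound status CRI-O returns above.
    var errNotFound = errors.New("NotFound: ID does not exist")

    var containers = map[string]bool{"8440e32e": true}

    func removeContainer(id string) error {
        if !containers[id] {
            return errNotFound
        }
        delete(containers, id)
        return nil
    }

    // removeIfPresent is the idempotent wrapper: NotFound means another caller
    // already won the race, so the desired state holds and this is a success.
    func removeIfPresent(id string) error {
        err := removeContainer(id)
        if errors.Is(err, errNotFound) {
            fmt.Printf("container %q already removed, ignoring\n", id)
            return nil
        }
        return err
    }

    func main() {
        _ = removeIfPresent("8440e32e") // first delete succeeds
        _ = removeIfPresent("8440e32e") // retry hits NotFound, treated as done
    }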
\"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea\": rpc error: code = NotFound desc = could not find container \"f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea\": container with ID starting with f47ba26247aca850b3eb03a0d915587dcb5dbc9aea84f753162a0a28078134ea not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.600244 4970 scope.go:117] "RemoveContainer" containerID="3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.600495 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c"} err="failed to get container status \"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c\": rpc error: code = NotFound desc = could not find container \"3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c\": container with ID starting with 3ead13238c850dfe14c329d28aeb1e877f8e4321b71207743eff19d0bddedf8c not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.600525 4970 scope.go:117] "RemoveContainer" containerID="d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.600812 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1"} err="failed to get container status \"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1\": rpc error: code = NotFound desc = could not find container \"d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1\": container with ID starting with d41ee0e8ee41f4ffdfb1a7b86d2bf990265b47e3f428cf28dc8522268e0e61a1 not found: ID does not exist" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.607656 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.607849 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.607891 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-scripts\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.608010 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-config-data\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.608058 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a2e66a43-20be-4a00-bb74-843e6dd7af44-run-httpd\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.608121 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2e66a43-20be-4a00-bb74-843e6dd7af44-log-httpd\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.608229 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr29w\" (UniqueName: \"kubernetes.io/projected/a2e66a43-20be-4a00-bb74-843e6dd7af44-kube-api-access-tr29w\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.681614 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1085522c-1af4-49ba-82fe-33a0fb1377ba" path="/var/lib/kubelet/pods/1085522c-1af4-49ba-82fe-33a0fb1377ba/volumes" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.710266 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.710311 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-scripts\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.710345 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-config-data\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.710375 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2e66a43-20be-4a00-bb74-843e6dd7af44-run-httpd\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.710400 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2e66a43-20be-4a00-bb74-843e6dd7af44-log-httpd\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.710438 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr29w\" (UniqueName: \"kubernetes.io/projected/a2e66a43-20be-4a00-bb74-843e6dd7af44-kube-api-access-tr29w\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.710468 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.711744 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2e66a43-20be-4a00-bb74-843e6dd7af44-run-httpd\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.711861 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2e66a43-20be-4a00-bb74-843e6dd7af44-log-httpd\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.714267 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.714418 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-scripts\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.714939 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.715977 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-config-data\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.745955 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr29w\" (UniqueName: \"kubernetes.io/projected/a2e66a43-20be-4a00-bb74-843e6dd7af44-kube-api-access-tr29w\") pod \"ceilometer-0\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " pod="openstack/ceilometer-0" Sep 30 10:04:59 crc kubenswrapper[4970]: I0930 10:04:59.912035 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:05:00 crc kubenswrapper[4970]: I0930 10:05:00.405720 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:05:00 crc kubenswrapper[4970]: W0930 10:05:00.408880 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2e66a43_20be_4a00_bb74_843e6dd7af44.slice/crio-2948ec68aff5f05d01a55ce6ca6f1c9da4d973d9ccb60f98bad3e30f7b24ed56 WatchSource:0}: Error finding container 2948ec68aff5f05d01a55ce6ca6f1c9da4d973d9ccb60f98bad3e30f7b24ed56: Status 404 returned error can't find the container with id 2948ec68aff5f05d01a55ce6ca6f1c9da4d973d9ccb60f98bad3e30f7b24ed56 Sep 30 10:05:00 crc kubenswrapper[4970]: I0930 10:05:00.456266 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2e66a43-20be-4a00-bb74-843e6dd7af44","Type":"ContainerStarted","Data":"2948ec68aff5f05d01a55ce6ca6f1c9da4d973d9ccb60f98bad3e30f7b24ed56"} Sep 30 10:05:01 crc kubenswrapper[4970]: I0930 10:05:01.475617 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2e66a43-20be-4a00-bb74-843e6dd7af44","Type":"ContainerStarted","Data":"d4d2c75f858b63c1613dc019e69124b3257eff29bdcabe41dc96809ff0422a50"} Sep 30 10:05:04 crc kubenswrapper[4970]: I0930 10:05:04.499878 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2e66a43-20be-4a00-bb74-843e6dd7af44","Type":"ContainerStarted","Data":"f741b787d7088767b2694754341aea106b51e5c15900cc3597836247f93cd6bc"} Sep 30 10:05:04 crc kubenswrapper[4970]: I0930 10:05:04.500486 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2e66a43-20be-4a00-bb74-843e6dd7af44","Type":"ContainerStarted","Data":"08e44e1342cca3e7b11350365c8ec8c853c6e6676145e0a847433f07fc3dcacc"} Sep 30 10:05:05 crc kubenswrapper[4970]: I0930 10:05:05.510069 4970 generic.go:334] "Generic (PLEG): container finished" podID="83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a" containerID="1367a89aff58b12a8df8031425b319f2849c4ce4cb6336435e25b4c4c2467f7a" exitCode=0 Sep 30 10:05:05 crc kubenswrapper[4970]: I0930 10:05:05.510191 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n2fjv" event={"ID":"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a","Type":"ContainerDied","Data":"1367a89aff58b12a8df8031425b319f2849c4ce4cb6336435e25b4c4c2467f7a"} Sep 30 10:05:06 crc kubenswrapper[4970]: I0930 10:05:06.523476 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2e66a43-20be-4a00-bb74-843e6dd7af44","Type":"ContainerStarted","Data":"8cc699b8753313310e417e756b2090a09fe9d2dbb186511ac8427eb601af34f2"} Sep 30 10:05:06 crc kubenswrapper[4970]: I0930 10:05:06.523801 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 30 10:05:06 crc kubenswrapper[4970]: I0930 10:05:06.561688 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.170877215 podStartE2EDuration="7.561671825s" podCreationTimestamp="2025-09-30 10:04:59 +0000 UTC" firstStartedPulling="2025-09-30 10:05:00.411153376 +0000 UTC m=+1113.483004310" lastFinishedPulling="2025-09-30 10:05:05.801947986 +0000 UTC m=+1118.873798920" observedRunningTime="2025-09-30 10:05:06.553399318 +0000 UTC m=+1119.625250252" watchObservedRunningTime="2025-09-30 10:05:06.561671825 
+0000 UTC m=+1119.633522759" Sep 30 10:05:06 crc kubenswrapper[4970]: I0930 10:05:06.962061 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.050416 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-scripts\") pod \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.050581 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-combined-ca-bundle\") pod \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.050702 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-config-data\") pod \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.051420 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qxdd\" (UniqueName: \"kubernetes.io/projected/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-kube-api-access-4qxdd\") pod \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\" (UID: \"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a\") " Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.056198 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-kube-api-access-4qxdd" (OuterVolumeSpecName: "kube-api-access-4qxdd") pod "83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a" (UID: "83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a"). InnerVolumeSpecName "kube-api-access-4qxdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.056236 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-scripts" (OuterVolumeSpecName: "scripts") pod "83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a" (UID: "83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.082373 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-config-data" (OuterVolumeSpecName: "config-data") pod "83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a" (UID: "83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.085912 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a" (UID: "83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.153831 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qxdd\" (UniqueName: \"kubernetes.io/projected/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-kube-api-access-4qxdd\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.153863 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.153873 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.153882 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.532511 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-n2fjv" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.532564 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-n2fjv" event={"ID":"83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a","Type":"ContainerDied","Data":"7aa55c91b4ccf1f16a4e6cc70524c1486d09452d01f1e31d128b5308aa9d0005"} Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.532598 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aa55c91b4ccf1f16a4e6cc70524c1486d09452d01f1e31d128b5308aa9d0005" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.643171 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Sep 30 10:05:07 crc kubenswrapper[4970]: E0930 10:05:07.643605 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a" containerName="nova-cell0-conductor-db-sync" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.643626 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a" containerName="nova-cell0-conductor-db-sync" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.643887 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a" containerName="nova-cell0-conductor-db-sync" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.644923 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.646912 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-b589f" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.649316 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.659795 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.762862 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66fda279-0629-46e9-8f55-145febd6facd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"66fda279-0629-46e9-8f55-145febd6facd\") " pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.762960 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66fda279-0629-46e9-8f55-145febd6facd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"66fda279-0629-46e9-8f55-145febd6facd\") " pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.763067 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d62k2\" (UniqueName: \"kubernetes.io/projected/66fda279-0629-46e9-8f55-145febd6facd-kube-api-access-d62k2\") pod \"nova-cell0-conductor-0\" (UID: \"66fda279-0629-46e9-8f55-145febd6facd\") " pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.864872 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66fda279-0629-46e9-8f55-145febd6facd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"66fda279-0629-46e9-8f55-145febd6facd\") " pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.865284 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66fda279-0629-46e9-8f55-145febd6facd-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"66fda279-0629-46e9-8f55-145febd6facd\") " pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.865390 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d62k2\" (UniqueName: \"kubernetes.io/projected/66fda279-0629-46e9-8f55-145febd6facd-kube-api-access-d62k2\") pod \"nova-cell0-conductor-0\" (UID: \"66fda279-0629-46e9-8f55-145febd6facd\") " pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.869068 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66fda279-0629-46e9-8f55-145febd6facd-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"66fda279-0629-46e9-8f55-145febd6facd\") " pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.869638 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66fda279-0629-46e9-8f55-145febd6facd-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"66fda279-0629-46e9-8f55-145febd6facd\") " pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.900489 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d62k2\" (UniqueName: \"kubernetes.io/projected/66fda279-0629-46e9-8f55-145febd6facd-kube-api-access-d62k2\") pod \"nova-cell0-conductor-0\" (UID: \"66fda279-0629-46e9-8f55-145febd6facd\") " pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:07 crc kubenswrapper[4970]: I0930 10:05:07.960810 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:08 crc kubenswrapper[4970]: I0930 10:05:08.397825 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Sep 30 10:05:08 crc kubenswrapper[4970]: W0930 10:05:08.422497 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66fda279_0629_46e9_8f55_145febd6facd.slice/crio-aa74a6214f1be204d3b28dab8d7f0794e83c04b9262c87dd2bbd220bbdfedf07 WatchSource:0}: Error finding container aa74a6214f1be204d3b28dab8d7f0794e83c04b9262c87dd2bbd220bbdfedf07: Status 404 returned error can't find the container with id aa74a6214f1be204d3b28dab8d7f0794e83c04b9262c87dd2bbd220bbdfedf07 Sep 30 10:05:08 crc kubenswrapper[4970]: I0930 10:05:08.543350 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"66fda279-0629-46e9-8f55-145febd6facd","Type":"ContainerStarted","Data":"aa74a6214f1be204d3b28dab8d7f0794e83c04b9262c87dd2bbd220bbdfedf07"} Sep 30 10:05:09 crc kubenswrapper[4970]: I0930 10:05:09.556722 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"66fda279-0629-46e9-8f55-145febd6facd","Type":"ContainerStarted","Data":"59c2e81a38fc0249933df95be76adf601f45f2eba6c5ee57a4deefd0bf202552"} Sep 30 10:05:09 crc kubenswrapper[4970]: I0930 10:05:09.557941 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:09 crc kubenswrapper[4970]: I0930 10:05:09.594219 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.594202168 podStartE2EDuration="2.594202168s" podCreationTimestamp="2025-09-30 10:05:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:05:09.591751151 +0000 UTC m=+1122.663602095" watchObservedRunningTime="2025-09-30 10:05:09.594202168 +0000 UTC m=+1122.666053102" Sep 30 10:05:17 crc kubenswrapper[4970]: I0930 10:05:17.990314 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.479518 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-9rsdz"] Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.481044 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.484254 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.484663 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.490038 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9rsdz"] Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.592554 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-scripts\") pod \"nova-cell0-cell-mapping-9rsdz\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.592785 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db4jv\" (UniqueName: \"kubernetes.io/projected/21aae404-5fe4-4df2-8f82-b860e665a2d8-kube-api-access-db4jv\") pod \"nova-cell0-cell-mapping-9rsdz\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.593053 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9rsdz\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.593192 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-config-data\") pod \"nova-cell0-cell-mapping-9rsdz\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.652241 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.653682 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.657791 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.676031 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.684892 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.686512 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.689501 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.696554 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db4jv\" (UniqueName: \"kubernetes.io/projected/21aae404-5fe4-4df2-8f82-b860e665a2d8-kube-api-access-db4jv\") pod \"nova-cell0-cell-mapping-9rsdz\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.696656 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9rsdz\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.696719 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-config-data\") pod \"nova-cell0-cell-mapping-9rsdz\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.696834 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-scripts\") pod \"nova-cell0-cell-mapping-9rsdz\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.712822 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-9rsdz\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.728657 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-config-data\") pod \"nova-cell0-cell-mapping-9rsdz\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.733122 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-scripts\") pod \"nova-cell0-cell-mapping-9rsdz\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.733538 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db4jv\" (UniqueName: \"kubernetes.io/projected/21aae404-5fe4-4df2-8f82-b860e665a2d8-kube-api-access-db4jv\") pod \"nova-cell0-cell-mapping-9rsdz\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.737750 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.803483 4970 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-logs\") pod \"nova-api-0\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.803609 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.803657 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9ea2ae-a60c-42a1-acf1-7249574b296a-config-data\") pod \"nova-scheduler-0\" (UID: \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.803711 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9ea2ae-a60c-42a1-acf1-7249574b296a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.803758 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-config-data\") pod \"nova-api-0\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.803821 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdj6w\" (UniqueName: \"kubernetes.io/projected/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-kube-api-access-sdj6w\") pod \"nova-api-0\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.803910 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zsc4\" (UniqueName: \"kubernetes.io/projected/3d9ea2ae-a60c-42a1-acf1-7249574b296a-kube-api-access-6zsc4\") pod \"nova-scheduler-0\" (UID: \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.808255 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.842205 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.843725 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.848170 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.867071 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.877443 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.907904 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.909318 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbvmb\" (UniqueName: \"kubernetes.io/projected/34917942-3fe8-45f6-9403-db6b9b6ec369-kube-api-access-fbvmb\") pod \"nova-metadata-0\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " pod="openstack/nova-metadata-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.913235 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdj6w\" (UniqueName: \"kubernetes.io/projected/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-kube-api-access-sdj6w\") pod \"nova-api-0\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.913411 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zsc4\" (UniqueName: \"kubernetes.io/projected/3d9ea2ae-a60c-42a1-acf1-7249574b296a-kube-api-access-6zsc4\") pod \"nova-scheduler-0\" (UID: \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.913470 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-logs\") pod \"nova-api-0\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.913603 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.913643 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9ea2ae-a60c-42a1-acf1-7249574b296a-config-data\") pod \"nova-scheduler-0\" (UID: \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.913725 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9ea2ae-a60c-42a1-acf1-7249574b296a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.913774 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-config-data\") pod \"nova-api-0\" (UID: 
\"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.913848 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34917942-3fe8-45f6-9403-db6b9b6ec369-config-data\") pod \"nova-metadata-0\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " pod="openstack/nova-metadata-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.913869 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34917942-3fe8-45f6-9403-db6b9b6ec369-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " pod="openstack/nova-metadata-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.913891 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34917942-3fe8-45f6-9403-db6b9b6ec369-logs\") pod \"nova-metadata-0\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " pod="openstack/nova-metadata-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.914652 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-logs\") pod \"nova-api-0\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.933709 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.937826 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.939980 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9ea2ae-a60c-42a1-acf1-7249574b296a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.941161 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-config-data\") pod \"nova-api-0\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.943545 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.948704 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9ea2ae-a60c-42a1-acf1-7249574b296a-config-data\") pod \"nova-scheduler-0\" (UID: \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.961903 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-b2trj"] Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.963882 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.970353 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdj6w\" (UniqueName: \"kubernetes.io/projected/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-kube-api-access-sdj6w\") pod \"nova-api-0\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " pod="openstack/nova-api-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.972476 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zsc4\" (UniqueName: \"kubernetes.io/projected/3d9ea2ae-a60c-42a1-acf1-7249574b296a-kube-api-access-6zsc4\") pod \"nova-scheduler-0\" (UID: \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.989044 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-b2trj"] Sep 30 10:05:18 crc kubenswrapper[4970]: I0930 10:05:18.997013 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.021918 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.021960 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.022056 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zchk9\" (UniqueName: \"kubernetes.io/projected/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-kube-api-access-zchk9\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.022085 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.022121 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-dns-svc\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.022161 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34917942-3fe8-45f6-9403-db6b9b6ec369-config-data\") pod \"nova-metadata-0\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " pod="openstack/nova-metadata-0" Sep 30 10:05:19 crc 
kubenswrapper[4970]: I0930 10:05:19.022177 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34917942-3fe8-45f6-9403-db6b9b6ec369-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " pod="openstack/nova-metadata-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.022193 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34917942-3fe8-45f6-9403-db6b9b6ec369-logs\") pod \"nova-metadata-0\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " pod="openstack/nova-metadata-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.022231 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbvmb\" (UniqueName: \"kubernetes.io/projected/34917942-3fe8-45f6-9403-db6b9b6ec369-kube-api-access-fbvmb\") pod \"nova-metadata-0\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " pod="openstack/nova-metadata-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.022265 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.022349 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-config\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.022374 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxhcg\" (UniqueName: \"kubernetes.io/projected/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-kube-api-access-vxhcg\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.022392 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.023721 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34917942-3fe8-45f6-9403-db6b9b6ec369-logs\") pod \"nova-metadata-0\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " pod="openstack/nova-metadata-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.027613 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34917942-3fe8-45f6-9403-db6b9b6ec369-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " pod="openstack/nova-metadata-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.029567 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/34917942-3fe8-45f6-9403-db6b9b6ec369-config-data\") pod \"nova-metadata-0\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " pod="openstack/nova-metadata-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.049949 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbvmb\" (UniqueName: \"kubernetes.io/projected/34917942-3fe8-45f6-9403-db6b9b6ec369-kube-api-access-fbvmb\") pod \"nova-metadata-0\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " pod="openstack/nova-metadata-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.051234 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.125174 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.125720 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-config\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.125763 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxhcg\" (UniqueName: \"kubernetes.io/projected/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-kube-api-access-vxhcg\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.125784 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.125810 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.125834 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.125882 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zchk9\" (UniqueName: \"kubernetes.io/projected/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-kube-api-access-zchk9\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.125915 4970 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.125955 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-dns-svc\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.127449 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-dns-svc\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.129677 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.130347 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-config\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.135363 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.135856 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.141771 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.149900 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.154531 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zchk9\" (UniqueName: 
\"kubernetes.io/projected/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-kube-api-access-zchk9\") pod \"dnsmasq-dns-865f5d856f-b2trj\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") " pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.157534 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxhcg\" (UniqueName: \"kubernetes.io/projected/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-kube-api-access-vxhcg\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.159125 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.371615 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.398495 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.647165 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-9rsdz"] Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.661154 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 10:05:19 crc kubenswrapper[4970]: W0930 10:05:19.666429 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21aae404_5fe4_4df2_8f82_b860e665a2d8.slice/crio-9b1610ec9317a82da8e4184892bcac73c03937effe3e25535a2ddf47e2e7747d WatchSource:0}: Error finding container 9b1610ec9317a82da8e4184892bcac73c03937effe3e25535a2ddf47e2e7747d: Status 404 returned error can't find the container with id 9b1610ec9317a82da8e4184892bcac73c03937effe3e25535a2ddf47e2e7747d Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.720044 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fnxht"] Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.726921 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fnxht"] Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.727071 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.732183 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.732477 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.801585 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:05:19 crc kubenswrapper[4970]: W0930 10:05:19.812116 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod34917942_3fe8_45f6_9403_db6b9b6ec369.slice/crio-05af4271bef1804d92323f1e55e13d8ce279f7c8ab47dac7a7c82834a55e06f0 WatchSource:0}: Error finding container 05af4271bef1804d92323f1e55e13d8ce279f7c8ab47dac7a7c82834a55e06f0: Status 404 returned error can't find the container with id 05af4271bef1804d92323f1e55e13d8ce279f7c8ab47dac7a7c82834a55e06f0 Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.864428 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg9dv\" (UniqueName: \"kubernetes.io/projected/69d21ef3-2e4e-4226-935f-b09feb8c4d19-kube-api-access-qg9dv\") pod \"nova-cell1-conductor-db-sync-fnxht\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") " pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.864754 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-config-data\") pod \"nova-cell1-conductor-db-sync-fnxht\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") " pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.864974 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fnxht\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") " pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.865481 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-scripts\") pod \"nova-cell1-conductor-db-sync-fnxht\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") " pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.871909 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.967420 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-scripts\") pod \"nova-cell1-conductor-db-sync-fnxht\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") " pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.967781 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg9dv\" (UniqueName: 
\"kubernetes.io/projected/69d21ef3-2e4e-4226-935f-b09feb8c4d19-kube-api-access-qg9dv\") pod \"nova-cell1-conductor-db-sync-fnxht\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") " pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.967890 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-config-data\") pod \"nova-cell1-conductor-db-sync-fnxht\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") " pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.967944 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fnxht\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") " pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.970181 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-b2trj"] Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.973908 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-config-data\") pod \"nova-cell1-conductor-db-sync-fnxht\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") " pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.975184 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-scripts\") pod \"nova-cell1-conductor-db-sync-fnxht\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") " pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.977809 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-fnxht\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") " pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.985507 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 10:05:19 crc kubenswrapper[4970]: I0930 10:05:19.995907 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg9dv\" (UniqueName: \"kubernetes.io/projected/69d21ef3-2e4e-4226-935f-b09feb8c4d19-kube-api-access-qg9dv\") pod \"nova-cell1-conductor-db-sync-fnxht\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") " pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:20 crc kubenswrapper[4970]: I0930 10:05:20.264590 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:20 crc kubenswrapper[4970]: I0930 10:05:20.664562 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9rsdz" event={"ID":"21aae404-5fe4-4df2-8f82-b860e665a2d8","Type":"ContainerStarted","Data":"c0e88f3b9d194702638da51d118b3042eb0387a108bc02f9b7cde503f0e9a87e"} Sep 30 10:05:20 crc kubenswrapper[4970]: I0930 10:05:20.666195 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9rsdz" event={"ID":"21aae404-5fe4-4df2-8f82-b860e665a2d8","Type":"ContainerStarted","Data":"9b1610ec9317a82da8e4184892bcac73c03937effe3e25535a2ddf47e2e7747d"} Sep 30 10:05:20 crc kubenswrapper[4970]: I0930 10:05:20.667982 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df2bf7be-ff3b-4a81-a65b-bb4848d39fda","Type":"ContainerStarted","Data":"206b4aa811482087b99e7304587e5d363831ae9eedeac5e9a49246cdebdc96fd"} Sep 30 10:05:20 crc kubenswrapper[4970]: I0930 10:05:20.670167 4970 generic.go:334] "Generic (PLEG): container finished" podID="6fcf5f84-7342-4cf4-864c-041b56ab8dbd" containerID="a555b90baf26b9cef620793203b081544bce0b44e423317548638925cfadca3b" exitCode=0 Sep 30 10:05:20 crc kubenswrapper[4970]: I0930 10:05:20.670233 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-b2trj" event={"ID":"6fcf5f84-7342-4cf4-864c-041b56ab8dbd","Type":"ContainerDied","Data":"a555b90baf26b9cef620793203b081544bce0b44e423317548638925cfadca3b"} Sep 30 10:05:20 crc kubenswrapper[4970]: I0930 10:05:20.670262 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-b2trj" event={"ID":"6fcf5f84-7342-4cf4-864c-041b56ab8dbd","Type":"ContainerStarted","Data":"c8d97988e9e025cd29f6e64c708a0a6239953d78c18b402741982210b6e05428"} Sep 30 10:05:20 crc kubenswrapper[4970]: I0930 10:05:20.671956 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34917942-3fe8-45f6-9403-db6b9b6ec369","Type":"ContainerStarted","Data":"05af4271bef1804d92323f1e55e13d8ce279f7c8ab47dac7a7c82834a55e06f0"} Sep 30 10:05:20 crc kubenswrapper[4970]: I0930 10:05:20.673459 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d9ea2ae-a60c-42a1-acf1-7249574b296a","Type":"ContainerStarted","Data":"cebbd20031e6cfaa12f33d99d8914f04a6faca748890e79c0ab6b21c28068f24"} Sep 30 10:05:20 crc kubenswrapper[4970]: I0930 10:05:20.674942 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51","Type":"ContainerStarted","Data":"5b2dde908877358793ee3abef161e2489cb8b47f07926dc9892ae0402493a203"} Sep 30 10:05:20 crc kubenswrapper[4970]: I0930 10:05:20.693452 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-9rsdz" podStartSLOduration=2.693427217 podStartE2EDuration="2.693427217s" podCreationTimestamp="2025-09-30 10:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:05:20.686746164 +0000 UTC m=+1133.758597098" watchObservedRunningTime="2025-09-30 10:05:20.693427217 +0000 UTC m=+1133.765278151" Sep 30 10:05:20 crc kubenswrapper[4970]: I0930 10:05:20.798103 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fnxht"] Sep 30 
10:05:20 crc kubenswrapper[4970]: W0930 10:05:20.810761 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69d21ef3_2e4e_4226_935f_b09feb8c4d19.slice/crio-a77b3000b5500307e850b80ff98f21106b4ac9e52237763c8f0219161d745275 WatchSource:0}: Error finding container a77b3000b5500307e850b80ff98f21106b4ac9e52237763c8f0219161d745275: Status 404 returned error can't find the container with id a77b3000b5500307e850b80ff98f21106b4ac9e52237763c8f0219161d745275 Sep 30 10:05:21 crc kubenswrapper[4970]: I0930 10:05:21.692712 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fnxht" event={"ID":"69d21ef3-2e4e-4226-935f-b09feb8c4d19","Type":"ContainerStarted","Data":"d32a28edb56f970e173492cb5914adbf73d93dd41742703b61dfead0036cf1c5"} Sep 30 10:05:21 crc kubenswrapper[4970]: I0930 10:05:21.693089 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fnxht" event={"ID":"69d21ef3-2e4e-4226-935f-b09feb8c4d19","Type":"ContainerStarted","Data":"a77b3000b5500307e850b80ff98f21106b4ac9e52237763c8f0219161d745275"} Sep 30 10:05:21 crc kubenswrapper[4970]: I0930 10:05:21.694954 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-b2trj" event={"ID":"6fcf5f84-7342-4cf4-864c-041b56ab8dbd","Type":"ContainerStarted","Data":"e4ec6327b128effe3978c3cdd6a3300e2de8fb083ec9b519e7a4902398876f4c"} Sep 30 10:05:21 crc kubenswrapper[4970]: I0930 10:05:21.714272 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-fnxht" podStartSLOduration=2.714246866 podStartE2EDuration="2.714246866s" podCreationTimestamp="2025-09-30 10:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:05:21.711508511 +0000 UTC m=+1134.783359455" watchObservedRunningTime="2025-09-30 10:05:21.714246866 +0000 UTC m=+1134.786097800" Sep 30 10:05:21 crc kubenswrapper[4970]: I0930 10:05:21.738827 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865f5d856f-b2trj" podStartSLOduration=3.738804549 podStartE2EDuration="3.738804549s" podCreationTimestamp="2025-09-30 10:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:05:21.728827545 +0000 UTC m=+1134.800678479" watchObservedRunningTime="2025-09-30 10:05:21.738804549 +0000 UTC m=+1134.810655483" Sep 30 10:05:22 crc kubenswrapper[4970]: I0930 10:05:22.660783 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:05:22 crc kubenswrapper[4970]: I0930 10:05:22.704402 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:22 crc kubenswrapper[4970]: I0930 10:05:22.713084 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.713713 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34917942-3fe8-45f6-9403-db6b9b6ec369","Type":"ContainerStarted","Data":"18ab2a4101dedd5dc4e88e4cdb6c06dfdc34c8cf853a7113018caf765aa43c43"} Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.713971 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"34917942-3fe8-45f6-9403-db6b9b6ec369","Type":"ContainerStarted","Data":"85420283b41dfcf9c4fd16a5bcc1b3266bd3974130d4aceda60605ca2424f087"} Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.714103 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34917942-3fe8-45f6-9403-db6b9b6ec369" containerName="nova-metadata-metadata" containerID="cri-o://18ab2a4101dedd5dc4e88e4cdb6c06dfdc34c8cf853a7113018caf765aa43c43" gracePeriod=30 Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.714075 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="34917942-3fe8-45f6-9403-db6b9b6ec369" containerName="nova-metadata-log" containerID="cri-o://85420283b41dfcf9c4fd16a5bcc1b3266bd3974130d4aceda60605ca2424f087" gracePeriod=30 Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.720675 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d9ea2ae-a60c-42a1-acf1-7249574b296a","Type":"ContainerStarted","Data":"8a4497583dea87ec98ad2b1307d06fa9abb960994e7c90765bc34c9106cb14ea"} Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.735543 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51","Type":"ContainerStarted","Data":"b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619"} Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.735641 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f4cbd3e2-b978-4c95-b485-abfd0f9ecc51" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619" gracePeriod=30 Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.749509 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df2bf7be-ff3b-4a81-a65b-bb4848d39fda","Type":"ContainerStarted","Data":"92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e"} Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.749556 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df2bf7be-ff3b-4a81-a65b-bb4848d39fda","Type":"ContainerStarted","Data":"d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19"} Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.787709 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.741489606 podStartE2EDuration="5.787694214s" podCreationTimestamp="2025-09-30 10:05:18 +0000 UTC" firstStartedPulling="2025-09-30 10:05:19.816341631 +0000 UTC m=+1132.888192565" lastFinishedPulling="2025-09-30 10:05:22.862546229 +0000 UTC m=+1135.934397173" observedRunningTime="2025-09-30 10:05:23.751324027 +0000 UTC m=+1136.823174951" watchObservedRunningTime="2025-09-30 10:05:23.787694214 +0000 UTC m=+1136.859545148" Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.788122 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.820791991 podStartE2EDuration="5.788117986s" podCreationTimestamp="2025-09-30 10:05:18 +0000 UTC" firstStartedPulling="2025-09-30 10:05:19.890546796 +0000 UTC m=+1132.962397730" lastFinishedPulling="2025-09-30 10:05:22.857872791 +0000 UTC m=+1135.929723725" 
observedRunningTime="2025-09-30 10:05:23.785258097 +0000 UTC m=+1136.857109031" watchObservedRunningTime="2025-09-30 10:05:23.788117986 +0000 UTC m=+1136.859968920" Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.814281 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.9926248319999997 podStartE2EDuration="5.814264033s" podCreationTimestamp="2025-09-30 10:05:18 +0000 UTC" firstStartedPulling="2025-09-30 10:05:20.033595158 +0000 UTC m=+1133.105446102" lastFinishedPulling="2025-09-30 10:05:22.855234369 +0000 UTC m=+1135.927085303" observedRunningTime="2025-09-30 10:05:23.810555701 +0000 UTC m=+1136.882406635" watchObservedRunningTime="2025-09-30 10:05:23.814264033 +0000 UTC m=+1136.886114967" Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.842514 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.643088178 podStartE2EDuration="5.842497997s" podCreationTimestamp="2025-09-30 10:05:18 +0000 UTC" firstStartedPulling="2025-09-30 10:05:19.672356433 +0000 UTC m=+1132.744207367" lastFinishedPulling="2025-09-30 10:05:22.871766252 +0000 UTC m=+1135.943617186" observedRunningTime="2025-09-30 10:05:23.841956482 +0000 UTC m=+1136.913807426" watchObservedRunningTime="2025-09-30 10:05:23.842497997 +0000 UTC m=+1136.914348921" Sep 30 10:05:23 crc kubenswrapper[4970]: I0930 10:05:23.998496 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Sep 30 10:05:24 crc kubenswrapper[4970]: I0930 10:05:24.052601 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 10:05:24 crc kubenswrapper[4970]: I0930 10:05:24.052697 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 10:05:24 crc kubenswrapper[4970]: I0930 10:05:24.372087 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:24 crc kubenswrapper[4970]: I0930 10:05:24.761522 4970 generic.go:334] "Generic (PLEG): container finished" podID="34917942-3fe8-45f6-9403-db6b9b6ec369" containerID="18ab2a4101dedd5dc4e88e4cdb6c06dfdc34c8cf853a7113018caf765aa43c43" exitCode=0 Sep 30 10:05:24 crc kubenswrapper[4970]: I0930 10:05:24.761559 4970 generic.go:334] "Generic (PLEG): container finished" podID="34917942-3fe8-45f6-9403-db6b9b6ec369" containerID="85420283b41dfcf9c4fd16a5bcc1b3266bd3974130d4aceda60605ca2424f087" exitCode=143 Sep 30 10:05:24 crc kubenswrapper[4970]: I0930 10:05:24.761598 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34917942-3fe8-45f6-9403-db6b9b6ec369","Type":"ContainerDied","Data":"18ab2a4101dedd5dc4e88e4cdb6c06dfdc34c8cf853a7113018caf765aa43c43"} Sep 30 10:05:24 crc kubenswrapper[4970]: I0930 10:05:24.761637 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34917942-3fe8-45f6-9403-db6b9b6ec369","Type":"ContainerDied","Data":"85420283b41dfcf9c4fd16a5bcc1b3266bd3974130d4aceda60605ca2424f087"} Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.258894 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.419910 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34917942-3fe8-45f6-9403-db6b9b6ec369-config-data\") pod \"34917942-3fe8-45f6-9403-db6b9b6ec369\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.419980 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbvmb\" (UniqueName: \"kubernetes.io/projected/34917942-3fe8-45f6-9403-db6b9b6ec369-kube-api-access-fbvmb\") pod \"34917942-3fe8-45f6-9403-db6b9b6ec369\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.420069 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34917942-3fe8-45f6-9403-db6b9b6ec369-logs\") pod \"34917942-3fe8-45f6-9403-db6b9b6ec369\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.420208 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34917942-3fe8-45f6-9403-db6b9b6ec369-combined-ca-bundle\") pod \"34917942-3fe8-45f6-9403-db6b9b6ec369\" (UID: \"34917942-3fe8-45f6-9403-db6b9b6ec369\") " Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.421411 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34917942-3fe8-45f6-9403-db6b9b6ec369-logs" (OuterVolumeSpecName: "logs") pod "34917942-3fe8-45f6-9403-db6b9b6ec369" (UID: "34917942-3fe8-45f6-9403-db6b9b6ec369"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.436272 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34917942-3fe8-45f6-9403-db6b9b6ec369-kube-api-access-fbvmb" (OuterVolumeSpecName: "kube-api-access-fbvmb") pod "34917942-3fe8-45f6-9403-db6b9b6ec369" (UID: "34917942-3fe8-45f6-9403-db6b9b6ec369"). InnerVolumeSpecName "kube-api-access-fbvmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.449777 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34917942-3fe8-45f6-9403-db6b9b6ec369-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34917942-3fe8-45f6-9403-db6b9b6ec369" (UID: "34917942-3fe8-45f6-9403-db6b9b6ec369"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.453210 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34917942-3fe8-45f6-9403-db6b9b6ec369-config-data" (OuterVolumeSpecName: "config-data") pod "34917942-3fe8-45f6-9403-db6b9b6ec369" (UID: "34917942-3fe8-45f6-9403-db6b9b6ec369"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.522384 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34917942-3fe8-45f6-9403-db6b9b6ec369-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.522419 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34917942-3fe8-45f6-9403-db6b9b6ec369-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.522433 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34917942-3fe8-45f6-9403-db6b9b6ec369-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.522444 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbvmb\" (UniqueName: \"kubernetes.io/projected/34917942-3fe8-45f6-9403-db6b9b6ec369-kube-api-access-fbvmb\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.773663 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"34917942-3fe8-45f6-9403-db6b9b6ec369","Type":"ContainerDied","Data":"05af4271bef1804d92323f1e55e13d8ce279f7c8ab47dac7a7c82834a55e06f0"} Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.774200 4970 scope.go:117] "RemoveContainer" containerID="18ab2a4101dedd5dc4e88e4cdb6c06dfdc34c8cf853a7113018caf765aa43c43" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.773685 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.802952 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.803282 4970 scope.go:117] "RemoveContainer" containerID="85420283b41dfcf9c4fd16a5bcc1b3266bd3974130d4aceda60605ca2424f087" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.820362 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.838852 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:05:25 crc kubenswrapper[4970]: E0930 10:05:25.839468 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34917942-3fe8-45f6-9403-db6b9b6ec369" containerName="nova-metadata-log" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.839503 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="34917942-3fe8-45f6-9403-db6b9b6ec369" containerName="nova-metadata-log" Sep 30 10:05:25 crc kubenswrapper[4970]: E0930 10:05:25.839572 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34917942-3fe8-45f6-9403-db6b9b6ec369" containerName="nova-metadata-metadata" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.839594 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="34917942-3fe8-45f6-9403-db6b9b6ec369" containerName="nova-metadata-metadata" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.839944 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="34917942-3fe8-45f6-9403-db6b9b6ec369" containerName="nova-metadata-log" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.840047 4970 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="34917942-3fe8-45f6-9403-db6b9b6ec369" containerName="nova-metadata-metadata" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.841637 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.844788 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.845260 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.859164 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.929896 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4413d76e-a986-40cd-85d0-b1dafab11bd2-logs\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.929987 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.930403 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvs94\" (UniqueName: \"kubernetes.io/projected/4413d76e-a986-40cd-85d0-b1dafab11bd2-kube-api-access-cvs94\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.930973 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-config-data\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:25 crc kubenswrapper[4970]: I0930 10:05:25.931033 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.033948 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvs94\" (UniqueName: \"kubernetes.io/projected/4413d76e-a986-40cd-85d0-b1dafab11bd2-kube-api-access-cvs94\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.034305 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-config-data\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.034418 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.034666 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4413d76e-a986-40cd-85d0-b1dafab11bd2-logs\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.034849 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.035089 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4413d76e-a986-40cd-85d0-b1dafab11bd2-logs\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.039225 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.040463 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.049462 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-config-data\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.072654 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvs94\" (UniqueName: \"kubernetes.io/projected/4413d76e-a986-40cd-85d0-b1dafab11bd2-kube-api-access-cvs94\") pod \"nova-metadata-0\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " pod="openstack/nova-metadata-0" Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.190841 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.631736 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:05:26 crc kubenswrapper[4970]: I0930 10:05:26.786813 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4413d76e-a986-40cd-85d0-b1dafab11bd2","Type":"ContainerStarted","Data":"2e24c7bd87bf034e39a7508ec1ff9cd75274e5b85666d707ecd9f5edf0759ae8"} Sep 30 10:05:27 crc kubenswrapper[4970]: I0930 10:05:27.685750 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34917942-3fe8-45f6-9403-db6b9b6ec369" path="/var/lib/kubelet/pods/34917942-3fe8-45f6-9403-db6b9b6ec369/volumes" Sep 30 10:05:27 crc kubenswrapper[4970]: I0930 10:05:27.797578 4970 generic.go:334] "Generic (PLEG): container finished" podID="21aae404-5fe4-4df2-8f82-b860e665a2d8" containerID="c0e88f3b9d194702638da51d118b3042eb0387a108bc02f9b7cde503f0e9a87e" exitCode=0 Sep 30 10:05:27 crc kubenswrapper[4970]: I0930 10:05:27.797646 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9rsdz" event={"ID":"21aae404-5fe4-4df2-8f82-b860e665a2d8","Type":"ContainerDied","Data":"c0e88f3b9d194702638da51d118b3042eb0387a108bc02f9b7cde503f0e9a87e"} Sep 30 10:05:27 crc kubenswrapper[4970]: I0930 10:05:27.800174 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4413d76e-a986-40cd-85d0-b1dafab11bd2","Type":"ContainerStarted","Data":"e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee"} Sep 30 10:05:27 crc kubenswrapper[4970]: I0930 10:05:27.800216 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4413d76e-a986-40cd-85d0-b1dafab11bd2","Type":"ContainerStarted","Data":"70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715"} Sep 30 10:05:27 crc kubenswrapper[4970]: I0930 10:05:27.839247 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.8392284549999998 podStartE2EDuration="2.839228455s" podCreationTimestamp="2025-09-30 10:05:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:05:27.832805879 +0000 UTC m=+1140.904656813" watchObservedRunningTime="2025-09-30 10:05:27.839228455 +0000 UTC m=+1140.911079379" Sep 30 10:05:28 crc kubenswrapper[4970]: I0930 10:05:28.997913 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.023726 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.160037 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.160111 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.189174 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.306344 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-scripts\") pod \"21aae404-5fe4-4df2-8f82-b860e665a2d8\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.306408 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-combined-ca-bundle\") pod \"21aae404-5fe4-4df2-8f82-b860e665a2d8\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.306525 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-config-data\") pod \"21aae404-5fe4-4df2-8f82-b860e665a2d8\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.306569 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db4jv\" (UniqueName: \"kubernetes.io/projected/21aae404-5fe4-4df2-8f82-b860e665a2d8-kube-api-access-db4jv\") pod \"21aae404-5fe4-4df2-8f82-b860e665a2d8\" (UID: \"21aae404-5fe4-4df2-8f82-b860e665a2d8\") " Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.313933 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-scripts" (OuterVolumeSpecName: "scripts") pod "21aae404-5fe4-4df2-8f82-b860e665a2d8" (UID: "21aae404-5fe4-4df2-8f82-b860e665a2d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.314753 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21aae404-5fe4-4df2-8f82-b860e665a2d8-kube-api-access-db4jv" (OuterVolumeSpecName: "kube-api-access-db4jv") pod "21aae404-5fe4-4df2-8f82-b860e665a2d8" (UID: "21aae404-5fe4-4df2-8f82-b860e665a2d8"). InnerVolumeSpecName "kube-api-access-db4jv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.345032 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21aae404-5fe4-4df2-8f82-b860e665a2d8" (UID: "21aae404-5fe4-4df2-8f82-b860e665a2d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.358567 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-config-data" (OuterVolumeSpecName: "config-data") pod "21aae404-5fe4-4df2-8f82-b860e665a2d8" (UID: "21aae404-5fe4-4df2-8f82-b860e665a2d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.400293 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865f5d856f-b2trj" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.408773 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.408809 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db4jv\" (UniqueName: \"kubernetes.io/projected/21aae404-5fe4-4df2-8f82-b860e665a2d8-kube-api-access-db4jv\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.408821 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.408835 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21aae404-5fe4-4df2-8f82-b860e665a2d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.476920 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-lmwbr"] Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.477311 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" podUID="ae142b63-43b3-488d-ab6d-327b057279b7" containerName="dnsmasq-dns" containerID="cri-o://711b84e89636e6150e556c209d473eb869d26dd245ee4923717a1649878c1817" gracePeriod=10 Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.838147 4970 generic.go:334] "Generic (PLEG): container finished" podID="ae142b63-43b3-488d-ab6d-327b057279b7" containerID="711b84e89636e6150e556c209d473eb869d26dd245ee4923717a1649878c1817" exitCode=0 Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.838225 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" event={"ID":"ae142b63-43b3-488d-ab6d-327b057279b7","Type":"ContainerDied","Data":"711b84e89636e6150e556c209d473eb869d26dd245ee4923717a1649878c1817"} Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.851557 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-9rsdz" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.851660 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-9rsdz" event={"ID":"21aae404-5fe4-4df2-8f82-b860e665a2d8","Type":"ContainerDied","Data":"9b1610ec9317a82da8e4184892bcac73c03937effe3e25535a2ddf47e2e7747d"} Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.852303 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1610ec9317a82da8e4184892bcac73c03937effe3e25535a2ddf47e2e7747d" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.858891 4970 generic.go:334] "Generic (PLEG): container finished" podID="69d21ef3-2e4e-4226-935f-b09feb8c4d19" containerID="d32a28edb56f970e173492cb5914adbf73d93dd41742703b61dfead0036cf1c5" exitCode=0 Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.860341 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fnxht" event={"ID":"69d21ef3-2e4e-4226-935f-b09feb8c4d19","Type":"ContainerDied","Data":"d32a28edb56f970e173492cb5914adbf73d93dd41742703b61dfead0036cf1c5"} Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.907653 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.912088 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:05:29 crc kubenswrapper[4970]: I0930 10:05:29.917397 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.012139 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.012334 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" containerName="nova-api-log" containerID="cri-o://d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19" gracePeriod=30 Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.012696 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" containerName="nova-api-api" containerID="cri-o://92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e" gracePeriod=30 Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.021465 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-config\") pod \"ae142b63-43b3-488d-ab6d-327b057279b7\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.021604 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-ovsdbserver-sb\") pod \"ae142b63-43b3-488d-ab6d-327b057279b7\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.021668 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-dns-swift-storage-0\") pod \"ae142b63-43b3-488d-ab6d-327b057279b7\" (UID: 
\"ae142b63-43b3-488d-ab6d-327b057279b7\") " Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.021727 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-dns-svc\") pod \"ae142b63-43b3-488d-ab6d-327b057279b7\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.021830 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdkp6\" (UniqueName: \"kubernetes.io/projected/ae142b63-43b3-488d-ab6d-327b057279b7-kube-api-access-rdkp6\") pod \"ae142b63-43b3-488d-ab6d-327b057279b7\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.021963 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-ovsdbserver-nb\") pod \"ae142b63-43b3-488d-ab6d-327b057279b7\" (UID: \"ae142b63-43b3-488d-ab6d-327b057279b7\") " Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.025103 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": EOF" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.025643 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": EOF" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.039303 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae142b63-43b3-488d-ab6d-327b057279b7-kube-api-access-rdkp6" (OuterVolumeSpecName: "kube-api-access-rdkp6") pod "ae142b63-43b3-488d-ab6d-327b057279b7" (UID: "ae142b63-43b3-488d-ab6d-327b057279b7"). InnerVolumeSpecName "kube-api-access-rdkp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.069151 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.069400 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4413d76e-a986-40cd-85d0-b1dafab11bd2" containerName="nova-metadata-log" containerID="cri-o://70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715" gracePeriod=30 Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.070412 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4413d76e-a986-40cd-85d0-b1dafab11bd2" containerName="nova-metadata-metadata" containerID="cri-o://e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee" gracePeriod=30 Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.124249 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdkp6\" (UniqueName: \"kubernetes.io/projected/ae142b63-43b3-488d-ab6d-327b057279b7-kube-api-access-rdkp6\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.126904 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ae142b63-43b3-488d-ab6d-327b057279b7" (UID: "ae142b63-43b3-488d-ab6d-327b057279b7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.138498 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ae142b63-43b3-488d-ab6d-327b057279b7" (UID: "ae142b63-43b3-488d-ab6d-327b057279b7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.156619 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-config" (OuterVolumeSpecName: "config") pod "ae142b63-43b3-488d-ab6d-327b057279b7" (UID: "ae142b63-43b3-488d-ab6d-327b057279b7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.174527 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ae142b63-43b3-488d-ab6d-327b057279b7" (UID: "ae142b63-43b3-488d-ab6d-327b057279b7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.178523 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ae142b63-43b3-488d-ab6d-327b057279b7" (UID: "ae142b63-43b3-488d-ab6d-327b057279b7"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.227094 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.227130 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.227141 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.227152 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.227163 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ae142b63-43b3-488d-ab6d-327b057279b7-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.503566 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.702811 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.841944 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvs94\" (UniqueName: \"kubernetes.io/projected/4413d76e-a986-40cd-85d0-b1dafab11bd2-kube-api-access-cvs94\") pod \"4413d76e-a986-40cd-85d0-b1dafab11bd2\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.842297 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4413d76e-a986-40cd-85d0-b1dafab11bd2-logs\") pod \"4413d76e-a986-40cd-85d0-b1dafab11bd2\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.842495 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-combined-ca-bundle\") pod \"4413d76e-a986-40cd-85d0-b1dafab11bd2\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.842645 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4413d76e-a986-40cd-85d0-b1dafab11bd2-logs" (OuterVolumeSpecName: "logs") pod "4413d76e-a986-40cd-85d0-b1dafab11bd2" (UID: "4413d76e-a986-40cd-85d0-b1dafab11bd2"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.842653 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-config-data\") pod \"4413d76e-a986-40cd-85d0-b1dafab11bd2\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.842841 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-nova-metadata-tls-certs\") pod \"4413d76e-a986-40cd-85d0-b1dafab11bd2\" (UID: \"4413d76e-a986-40cd-85d0-b1dafab11bd2\") " Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.843901 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4413d76e-a986-40cd-85d0-b1dafab11bd2-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.848632 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4413d76e-a986-40cd-85d0-b1dafab11bd2-kube-api-access-cvs94" (OuterVolumeSpecName: "kube-api-access-cvs94") pod "4413d76e-a986-40cd-85d0-b1dafab11bd2" (UID: "4413d76e-a986-40cd-85d0-b1dafab11bd2"). InnerVolumeSpecName "kube-api-access-cvs94". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.873075 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4413d76e-a986-40cd-85d0-b1dafab11bd2" (UID: "4413d76e-a986-40cd-85d0-b1dafab11bd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.876229 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-config-data" (OuterVolumeSpecName: "config-data") pod "4413d76e-a986-40cd-85d0-b1dafab11bd2" (UID: "4413d76e-a986-40cd-85d0-b1dafab11bd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.878628 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" event={"ID":"ae142b63-43b3-488d-ab6d-327b057279b7","Type":"ContainerDied","Data":"deaed87996c739b5eb53eee0ad0fa3f11c27d1eebe2452e5abc0359cfe16cf7a"} Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.878704 4970 scope.go:117] "RemoveContainer" containerID="711b84e89636e6150e556c209d473eb869d26dd245ee4923717a1649878c1817" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.878938 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-lmwbr" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.888184 4970 generic.go:334] "Generic (PLEG): container finished" podID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" containerID="d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19" exitCode=143 Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.888251 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df2bf7be-ff3b-4a81-a65b-bb4848d39fda","Type":"ContainerDied","Data":"d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19"} Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.895820 4970 generic.go:334] "Generic (PLEG): container finished" podID="4413d76e-a986-40cd-85d0-b1dafab11bd2" containerID="e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee" exitCode=0 Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.896184 4970 generic.go:334] "Generic (PLEG): container finished" podID="4413d76e-a986-40cd-85d0-b1dafab11bd2" containerID="70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715" exitCode=143 Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.896524 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.897365 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4413d76e-a986-40cd-85d0-b1dafab11bd2","Type":"ContainerDied","Data":"e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee"} Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.897408 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4413d76e-a986-40cd-85d0-b1dafab11bd2","Type":"ContainerDied","Data":"70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715"} Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.897420 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4413d76e-a986-40cd-85d0-b1dafab11bd2","Type":"ContainerDied","Data":"2e24c7bd87bf034e39a7508ec1ff9cd75274e5b85666d707ecd9f5edf0759ae8"} Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.930297 4970 scope.go:117] "RemoveContainer" containerID="8b1a6f4aeeec8b807a8ba39e8e71940dbbdbcc74ded84e1e3448569270e0c732" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.936347 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4413d76e-a986-40cd-85d0-b1dafab11bd2" (UID: "4413d76e-a986-40cd-85d0-b1dafab11bd2"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.941902 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-lmwbr"] Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.945659 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.945683 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.945696 4970 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4413d76e-a986-40cd-85d0-b1dafab11bd2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.945705 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvs94\" (UniqueName: \"kubernetes.io/projected/4413d76e-a986-40cd-85d0-b1dafab11bd2-kube-api-access-cvs94\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.949839 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-lmwbr"] Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.957654 4970 scope.go:117] "RemoveContainer" containerID="e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee" Sep 30 10:05:30 crc kubenswrapper[4970]: I0930 10:05:30.985865 4970 scope.go:117] "RemoveContainer" containerID="70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.033013 4970 scope.go:117] "RemoveContainer" containerID="e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee" Sep 30 10:05:31 crc kubenswrapper[4970]: E0930 10:05:31.051208 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee\": container with ID starting with e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee not found: ID does not exist" containerID="e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.051273 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee"} err="failed to get container status \"e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee\": rpc error: code = NotFound desc = could not find container \"e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee\": container with ID starting with e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee not found: ID does not exist" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.051312 4970 scope.go:117] "RemoveContainer" containerID="70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715" Sep 30 10:05:31 crc kubenswrapper[4970]: E0930 10:05:31.052433 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715\": container with ID starting with 
70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715 not found: ID does not exist" containerID="70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.052488 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715"} err="failed to get container status \"70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715\": rpc error: code = NotFound desc = could not find container \"70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715\": container with ID starting with 70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715 not found: ID does not exist" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.052522 4970 scope.go:117] "RemoveContainer" containerID="e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.052787 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee"} err="failed to get container status \"e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee\": rpc error: code = NotFound desc = could not find container \"e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee\": container with ID starting with e4ee3c0aaf84bf4e5fa883a3c2da67203c2e9d196e1d547680e68ec4fe59ebee not found: ID does not exist" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.052806 4970 scope.go:117] "RemoveContainer" containerID="70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.054145 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715"} err="failed to get container status \"70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715\": rpc error: code = NotFound desc = could not find container \"70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715\": container with ID starting with 70322b655f0403ebd82367ca9c306463cbf407b6ab1ab18b9ef1716d65f8d715 not found: ID does not exist" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.261579 4970 util.go:48] "No ready sandbox for pod can be found. 
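[Annotation] The repeated "ContainerStatus from runtime service failed ... NotFound" / "DeleteContainer returned error" pairs above are a benign race: by the time the second RemoveContainer fires, CRI-O has already removed the container, so the status lookup reports NotFound. A delete path that treats NotFound as success makes this idempotent; a sketch using the real google.golang.org/grpc status codes (the CRI is gRPC-based, matching the "rpc error: code = NotFound" text), with an invented remove callback:

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // deleteContainer treats a NotFound response as "already deleted".
    func deleteContainer(id string, remove func(string) error) error {
    	err := remove(id)
    	if status.Code(err) == codes.NotFound {
    		return nil // container already gone; deletion effectively succeeded
    	}
    	return err
    }

    func main() {
    	notFound := func(id string) error {
    		return status.Error(codes.NotFound,
    			fmt.Sprintf("could not find container %q", id))
    	}
    	if err := deleteContainer("e4ee3c0a", notFound); err != nil {
    		fmt.Println("delete failed:", err)
    	} else {
    		fmt.Println("delete treated as success (container already gone)")
    	}
    }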
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.261579 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fnxht"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.282645 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.300601 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.307936 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 10:05:31 crc kubenswrapper[4970]: E0930 10:05:31.308464 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69d21ef3-2e4e-4226-935f-b09feb8c4d19" containerName="nova-cell1-conductor-db-sync"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.308492 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="69d21ef3-2e4e-4226-935f-b09feb8c4d19" containerName="nova-cell1-conductor-db-sync"
Sep 30 10:05:31 crc kubenswrapper[4970]: E0930 10:05:31.308509 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae142b63-43b3-488d-ab6d-327b057279b7" containerName="dnsmasq-dns"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.308518 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae142b63-43b3-488d-ab6d-327b057279b7" containerName="dnsmasq-dns"
Sep 30 10:05:31 crc kubenswrapper[4970]: E0930 10:05:31.308528 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4413d76e-a986-40cd-85d0-b1dafab11bd2" containerName="nova-metadata-log"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.308536 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4413d76e-a986-40cd-85d0-b1dafab11bd2" containerName="nova-metadata-log"
Sep 30 10:05:31 crc kubenswrapper[4970]: E0930 10:05:31.308564 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae142b63-43b3-488d-ab6d-327b057279b7" containerName="init"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.308590 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae142b63-43b3-488d-ab6d-327b057279b7" containerName="init"
Sep 30 10:05:31 crc kubenswrapper[4970]: E0930 10:05:31.308621 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4413d76e-a986-40cd-85d0-b1dafab11bd2" containerName="nova-metadata-metadata"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.308631 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4413d76e-a986-40cd-85d0-b1dafab11bd2" containerName="nova-metadata-metadata"
Sep 30 10:05:31 crc kubenswrapper[4970]: E0930 10:05:31.308654 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21aae404-5fe4-4df2-8f82-b860e665a2d8" containerName="nova-manage"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.308662 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="21aae404-5fe4-4df2-8f82-b860e665a2d8" containerName="nova-manage"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.308880 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="21aae404-5fe4-4df2-8f82-b860e665a2d8" containerName="nova-manage"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.308907 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae142b63-43b3-488d-ab6d-327b057279b7" containerName="dnsmasq-dns"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.308922 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="4413d76e-a986-40cd-85d0-b1dafab11bd2" containerName="nova-metadata-log"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.308931 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="4413d76e-a986-40cd-85d0-b1dafab11bd2" containerName="nova-metadata-metadata"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.308944 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="69d21ef3-2e4e-4226-935f-b09feb8c4d19" containerName="nova-cell1-conductor-db-sync"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.310004 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.313881 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.315330 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.348802 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.359473 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-combined-ca-bundle\") pod \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") "
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.359548 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg9dv\" (UniqueName: \"kubernetes.io/projected/69d21ef3-2e4e-4226-935f-b09feb8c4d19-kube-api-access-qg9dv\") pod \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") "
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.359610 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-config-data\") pod \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") "
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.359758 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-scripts\") pod \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\" (UID: \"69d21ef3-2e4e-4226-935f-b09feb8c4d19\") "
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.360493 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b41f183c-fb01-431c-aa98-75bb7b7bb669-logs\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.360572 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.360625 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-config-data\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.360688 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.360731 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7p4h\" (UniqueName: \"kubernetes.io/projected/b41f183c-fb01-431c-aa98-75bb7b7bb669-kube-api-access-f7p4h\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.384701 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69d21ef3-2e4e-4226-935f-b09feb8c4d19-kube-api-access-qg9dv" (OuterVolumeSpecName: "kube-api-access-qg9dv") pod "69d21ef3-2e4e-4226-935f-b09feb8c4d19" (UID: "69d21ef3-2e4e-4226-935f-b09feb8c4d19"). InnerVolumeSpecName "kube-api-access-qg9dv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.389164 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-scripts" (OuterVolumeSpecName: "scripts") pod "69d21ef3-2e4e-4226-935f-b09feb8c4d19" (UID: "69d21ef3-2e4e-4226-935f-b09feb8c4d19"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.438452 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69d21ef3-2e4e-4226-935f-b09feb8c4d19" (UID: "69d21ef3-2e4e-4226-935f-b09feb8c4d19"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.438605 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-config-data" (OuterVolumeSpecName: "config-data") pod "69d21ef3-2e4e-4226-935f-b09feb8c4d19" (UID: "69d21ef3-2e4e-4226-935f-b09feb8c4d19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.463447 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b41f183c-fb01-431c-aa98-75bb7b7bb669-logs\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.463608 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.463698 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-config-data\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.463767 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.463800 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7p4h\" (UniqueName: \"kubernetes.io/projected/b41f183c-fb01-431c-aa98-75bb7b7bb669-kube-api-access-f7p4h\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.463876 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.463891 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg9dv\" (UniqueName: \"kubernetes.io/projected/69d21ef3-2e4e-4226-935f-b09feb8c4d19-kube-api-access-qg9dv\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.463904 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.463914 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69d21ef3-2e4e-4226-935f-b09feb8c4d19-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.463973 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b41f183c-fb01-431c-aa98-75bb7b7bb669-logs\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.467504 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-nova-metadata-tls-certs\") pod 
\"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.467619 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.468135 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-config-data\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.482449 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7p4h\" (UniqueName: \"kubernetes.io/projected/b41f183c-fb01-431c-aa98-75bb7b7bb669-kube-api-access-f7p4h\") pod \"nova-metadata-0\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " pod="openstack/nova-metadata-0" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.638181 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.681692 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4413d76e-a986-40cd-85d0-b1dafab11bd2" path="/var/lib/kubelet/pods/4413d76e-a986-40cd-85d0-b1dafab11bd2/volumes" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.682370 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae142b63-43b3-488d-ab6d-327b057279b7" path="/var/lib/kubelet/pods/ae142b63-43b3-488d-ab6d-327b057279b7/volumes" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.914031 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-fnxht" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.914781 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-fnxht" event={"ID":"69d21ef3-2e4e-4226-935f-b09feb8c4d19","Type":"ContainerDied","Data":"a77b3000b5500307e850b80ff98f21106b4ac9e52237763c8f0219161d745275"} Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.914813 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a77b3000b5500307e850b80ff98f21106b4ac9e52237763c8f0219161d745275" Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.923686 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3d9ea2ae-a60c-42a1-acf1-7249574b296a" containerName="nova-scheduler-scheduler" containerID="cri-o://8a4497583dea87ec98ad2b1307d06fa9abb960994e7c90765bc34c9106cb14ea" gracePeriod=30 Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.956816 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.958412 4970 util.go:30] "No sandbox for pod can be found. 
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.960736 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Sep 30 10:05:31 crc kubenswrapper[4970]: I0930 10:05:31.978684 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Sep 30 10:05:32 crc kubenswrapper[4970]: W0930 10:05:32.073788 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb41f183c_fb01_431c_aa98_75bb7b7bb669.slice/crio-5a27be673c3d5a8f7401501259fc7fbd993fc0fbe5bce9cf1dc1d66a57dda732 WatchSource:0}: Error finding container 5a27be673c3d5a8f7401501259fc7fbd993fc0fbe5bce9cf1dc1d66a57dda732: Status 404 returned error can't find the container with id 5a27be673c3d5a8f7401501259fc7fbd993fc0fbe5bce9cf1dc1d66a57dda732
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.074556 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.078717 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9qdh\" (UniqueName: \"kubernetes.io/projected/3e334a93-12b2-402f-97e7-d5f77c7cb8bc-kube-api-access-j9qdh\") pod \"nova-cell1-conductor-0\" (UID: \"3e334a93-12b2-402f-97e7-d5f77c7cb8bc\") " pod="openstack/nova-cell1-conductor-0"
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.078900 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e334a93-12b2-402f-97e7-d5f77c7cb8bc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3e334a93-12b2-402f-97e7-d5f77c7cb8bc\") " pod="openstack/nova-cell1-conductor-0"
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.078966 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e334a93-12b2-402f-97e7-d5f77c7cb8bc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3e334a93-12b2-402f-97e7-d5f77c7cb8bc\") " pod="openstack/nova-cell1-conductor-0"
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.181097 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9qdh\" (UniqueName: \"kubernetes.io/projected/3e334a93-12b2-402f-97e7-d5f77c7cb8bc-kube-api-access-j9qdh\") pod \"nova-cell1-conductor-0\" (UID: \"3e334a93-12b2-402f-97e7-d5f77c7cb8bc\") " pod="openstack/nova-cell1-conductor-0"
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.181230 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e334a93-12b2-402f-97e7-d5f77c7cb8bc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3e334a93-12b2-402f-97e7-d5f77c7cb8bc\") " pod="openstack/nova-cell1-conductor-0"
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.181261 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e334a93-12b2-402f-97e7-d5f77c7cb8bc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3e334a93-12b2-402f-97e7-d5f77c7cb8bc\") " pod="openstack/nova-cell1-conductor-0"
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.187180 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e334a93-12b2-402f-97e7-d5f77c7cb8bc-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3e334a93-12b2-402f-97e7-d5f77c7cb8bc\") " pod="openstack/nova-cell1-conductor-0"
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.187866 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e334a93-12b2-402f-97e7-d5f77c7cb8bc-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3e334a93-12b2-402f-97e7-d5f77c7cb8bc\") " pod="openstack/nova-cell1-conductor-0"
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.202500 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9qdh\" (UniqueName: \"kubernetes.io/projected/3e334a93-12b2-402f-97e7-d5f77c7cb8bc-kube-api-access-j9qdh\") pod \"nova-cell1-conductor-0\" (UID: \"3e334a93-12b2-402f-97e7-d5f77c7cb8bc\") " pod="openstack/nova-cell1-conductor-0"
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.283907 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.732925 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Sep 30 10:05:32 crc kubenswrapper[4970]: W0930 10:05:32.737682 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e334a93_12b2_402f_97e7_d5f77c7cb8bc.slice/crio-63d5dea63a3bbb4079b7579e1f3560b4272c893a171098425f519d1b87ed0b24 WatchSource:0}: Error finding container 63d5dea63a3bbb4079b7579e1f3560b4272c893a171098425f519d1b87ed0b24: Status 404 returned error can't find the container with id 63d5dea63a3bbb4079b7579e1f3560b4272c893a171098425f519d1b87ed0b24
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.937722 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b41f183c-fb01-431c-aa98-75bb7b7bb669","Type":"ContainerStarted","Data":"38977e0598005485269cbaa549e133c231dcf4c3595ca46cd8fdca7199062970"}
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.937775 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b41f183c-fb01-431c-aa98-75bb7b7bb669","Type":"ContainerStarted","Data":"f865d55dc10dfaff22d7485e134dd7508a1068ca089bef51ec8a538b2f16c6ca"}
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.937790 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b41f183c-fb01-431c-aa98-75bb7b7bb669","Type":"ContainerStarted","Data":"5a27be673c3d5a8f7401501259fc7fbd993fc0fbe5bce9cf1dc1d66a57dda732"}
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.939391 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3e334a93-12b2-402f-97e7-d5f77c7cb8bc","Type":"ContainerStarted","Data":"63d5dea63a3bbb4079b7579e1f3560b4272c893a171098425f519d1b87ed0b24"}
Sep 30 10:05:32 crc kubenswrapper[4970]: I0930 10:05:32.969111 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.969085851 podStartE2EDuration="1.969085851s" podCreationTimestamp="2025-09-30 10:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:05:32.960201228 +0000 UTC m=+1146.032052162" watchObservedRunningTime="2025-09-30 10:05:32.969085851 +0000 UTC m=+1146.040936785"
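[Annotation] The pod_startup_latency_tracker entry above reports podStartSLOduration=1.969085851 for nova-metadata-0. With no image pull observed (firstStartedPulling/lastFinishedPulling are the zero time), that figure is simply the watch-observed running time minus podCreationTimestamp. Reproducing the arithmetic from the logged timestamps:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Layout matching Go's default time.Time formatting used in the log.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, err := time.Parse(layout, "2025-09-30 10:05:31 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	// watchObservedRunningTime from the entry above.
    	running, err := time.Parse(layout, "2025-09-30 10:05:32.969085851 +0000 UTC")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("podStartSLOduration =", running.Sub(created)) // 1.969085851s
    }

The trailing m=+1146.0... values are monotonic-clock offsets (seconds since process start) that Go appends when formatting a time.Time carrying a monotonic reading; they are not part of the wall-clock timestamp.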
m=+1146.032052162" watchObservedRunningTime="2025-09-30 10:05:32.969085851 +0000 UTC m=+1146.040936785" Sep 30 10:05:33 crc kubenswrapper[4970]: I0930 10:05:33.948947 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3e334a93-12b2-402f-97e7-d5f77c7cb8bc","Type":"ContainerStarted","Data":"72648fbdecdaf622940b204f8be4a2bbf0ab97ffa42ac8284fc999a2cecc37dd"} Sep 30 10:05:33 crc kubenswrapper[4970]: I0930 10:05:33.967203 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.967185367 podStartE2EDuration="2.967185367s" podCreationTimestamp="2025-09-30 10:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:05:33.96074801 +0000 UTC m=+1147.032598944" watchObservedRunningTime="2025-09-30 10:05:33.967185367 +0000 UTC m=+1147.039036301" Sep 30 10:05:33 crc kubenswrapper[4970]: E0930 10:05:33.999784 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8a4497583dea87ec98ad2b1307d06fa9abb960994e7c90765bc34c9106cb14ea" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 30 10:05:34 crc kubenswrapper[4970]: E0930 10:05:34.001490 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8a4497583dea87ec98ad2b1307d06fa9abb960994e7c90765bc34c9106cb14ea" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 30 10:05:34 crc kubenswrapper[4970]: E0930 10:05:34.002835 4970 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8a4497583dea87ec98ad2b1307d06fa9abb960994e7c90765bc34c9106cb14ea" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Sep 30 10:05:34 crc kubenswrapper[4970]: E0930 10:05:34.002879 4970 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3d9ea2ae-a60c-42a1-acf1-7249574b296a" containerName="nova-scheduler-scheduler" Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.209023 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.209278 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="15b0c3a5-a622-4a16-aac7-b807588c48a7" containerName="kube-state-metrics" containerID="cri-o://41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b" gracePeriod=30 Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.712753 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.842762 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-547jf\" (UniqueName: \"kubernetes.io/projected/15b0c3a5-a622-4a16-aac7-b807588c48a7-kube-api-access-547jf\") pod \"15b0c3a5-a622-4a16-aac7-b807588c48a7\" (UID: \"15b0c3a5-a622-4a16-aac7-b807588c48a7\") " Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.854099 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15b0c3a5-a622-4a16-aac7-b807588c48a7-kube-api-access-547jf" (OuterVolumeSpecName: "kube-api-access-547jf") pod "15b0c3a5-a622-4a16-aac7-b807588c48a7" (UID: "15b0c3a5-a622-4a16-aac7-b807588c48a7"). InnerVolumeSpecName "kube-api-access-547jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.945501 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-547jf\" (UniqueName: \"kubernetes.io/projected/15b0c3a5-a622-4a16-aac7-b807588c48a7-kube-api-access-547jf\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.967194 4970 generic.go:334] "Generic (PLEG): container finished" podID="15b0c3a5-a622-4a16-aac7-b807588c48a7" containerID="41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b" exitCode=2 Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.967251 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.967276 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"15b0c3a5-a622-4a16-aac7-b807588c48a7","Type":"ContainerDied","Data":"41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b"} Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.967306 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"15b0c3a5-a622-4a16-aac7-b807588c48a7","Type":"ContainerDied","Data":"72e7d130ea0dd730215b73f26d7d5c15ada9b25b99a5639d85fb55eb9d4dc7ca"} Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.967325 4970 scope.go:117] "RemoveContainer" containerID="41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b" Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.975766 4970 generic.go:334] "Generic (PLEG): container finished" podID="3d9ea2ae-a60c-42a1-acf1-7249574b296a" containerID="8a4497583dea87ec98ad2b1307d06fa9abb960994e7c90765bc34c9106cb14ea" exitCode=0 Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.977040 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d9ea2ae-a60c-42a1-acf1-7249574b296a","Type":"ContainerDied","Data":"8a4497583dea87ec98ad2b1307d06fa9abb960994e7c90765bc34c9106cb14ea"} Sep 30 10:05:34 crc kubenswrapper[4970]: I0930 10:05:34.977130 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.009185 4970 scope.go:117] "RemoveContainer" containerID="41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b" Sep 30 10:05:35 crc kubenswrapper[4970]: E0930 10:05:35.010418 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b\": container with ID starting with 41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b not found: ID does not exist" containerID="41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b" Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.010455 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b"} err="failed to get container status \"41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b\": rpc error: code = NotFound desc = could not find container \"41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b\": container with ID starting with 41da09d3a5189f07dfc52276c16d2a0b47ca8b2f811bfa4de0354b57ed55c73b not found: ID does not exist" Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.021996 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.044881 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.061356 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 10:05:35 crc kubenswrapper[4970]: E0930 10:05:35.061890 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15b0c3a5-a622-4a16-aac7-b807588c48a7" containerName="kube-state-metrics" Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.061905 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="15b0c3a5-a622-4a16-aac7-b807588c48a7" containerName="kube-state-metrics" Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.062238 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="15b0c3a5-a622-4a16-aac7-b807588c48a7" containerName="kube-state-metrics" Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.063024 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.071424 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.071709 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.076890 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.141531 4970 util.go:48] "No ready sandbox for pod can be found. 
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.150044 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6ccdd4-d83d-47fe-8283-b4625ad7d17f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f\") " pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.150196 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhphf\" (UniqueName: \"kubernetes.io/projected/fd6ccdd4-d83d-47fe-8283-b4625ad7d17f-kube-api-access-nhphf\") pod \"kube-state-metrics-0\" (UID: \"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f\") " pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.150223 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd6ccdd4-d83d-47fe-8283-b4625ad7d17f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f\") " pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.150283 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fd6ccdd4-d83d-47fe-8283-b4625ad7d17f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f\") " pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.251617 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zsc4\" (UniqueName: \"kubernetes.io/projected/3d9ea2ae-a60c-42a1-acf1-7249574b296a-kube-api-access-6zsc4\") pod \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\" (UID: \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\") "
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.252481 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9ea2ae-a60c-42a1-acf1-7249574b296a-combined-ca-bundle\") pod \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\" (UID: \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\") "
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.252626 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9ea2ae-a60c-42a1-acf1-7249574b296a-config-data\") pod \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\" (UID: \"3d9ea2ae-a60c-42a1-acf1-7249574b296a\") "
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.252959 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhphf\" (UniqueName: \"kubernetes.io/projected/fd6ccdd4-d83d-47fe-8283-b4625ad7d17f-kube-api-access-nhphf\") pod \"kube-state-metrics-0\" (UID: \"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f\") " pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.253018 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd6ccdd4-d83d-47fe-8283-b4625ad7d17f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f\") " pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.253091 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fd6ccdd4-d83d-47fe-8283-b4625ad7d17f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f\") " pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.253140 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6ccdd4-d83d-47fe-8283-b4625ad7d17f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f\") " pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.258592 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/fd6ccdd4-d83d-47fe-8283-b4625ad7d17f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f\") " pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.258771 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d9ea2ae-a60c-42a1-acf1-7249574b296a-kube-api-access-6zsc4" (OuterVolumeSpecName: "kube-api-access-6zsc4") pod "3d9ea2ae-a60c-42a1-acf1-7249574b296a" (UID: "3d9ea2ae-a60c-42a1-acf1-7249574b296a"). InnerVolumeSpecName "kube-api-access-6zsc4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.258854 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd6ccdd4-d83d-47fe-8283-b4625ad7d17f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f\") " pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.266210 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/fd6ccdd4-d83d-47fe-8283-b4625ad7d17f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f\") " pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.274663 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhphf\" (UniqueName: \"kubernetes.io/projected/fd6ccdd4-d83d-47fe-8283-b4625ad7d17f-kube-api-access-nhphf\") pod \"kube-state-metrics-0\" (UID: \"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f\") " pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.283350 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9ea2ae-a60c-42a1-acf1-7249574b296a-config-data" (OuterVolumeSpecName: "config-data") pod "3d9ea2ae-a60c-42a1-acf1-7249574b296a" (UID: "3d9ea2ae-a60c-42a1-acf1-7249574b296a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.283473 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d9ea2ae-a60c-42a1-acf1-7249574b296a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3d9ea2ae-a60c-42a1-acf1-7249574b296a" (UID: "3d9ea2ae-a60c-42a1-acf1-7249574b296a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.354860 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d9ea2ae-a60c-42a1-acf1-7249574b296a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.354899 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d9ea2ae-a60c-42a1-acf1-7249574b296a-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.354909 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zsc4\" (UniqueName: \"kubernetes.io/projected/3d9ea2ae-a60c-42a1-acf1-7249574b296a-kube-api-access-6zsc4\") on node \"crc\" DevicePath \"\""
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.411550 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.666863 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.679262 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15b0c3a5-a622-4a16-aac7-b807588c48a7" path="/var/lib/kubelet/pods/15b0c3a5-a622-4a16-aac7-b807588c48a7/volumes"
Sep 30 10:05:35 crc kubenswrapper[4970]: W0930 10:05:35.685473 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd6ccdd4_d83d_47fe_8283_b4625ad7d17f.slice/crio-4883a208a4790329db4091249034ef7757347844880cb43c43daf579c7c2e999 WatchSource:0}: Error finding container 4883a208a4790329db4091249034ef7757347844880cb43c43daf579c7c2e999: Status 404 returned error can't find the container with id 4883a208a4790329db4091249034ef7757347844880cb43c43daf579c7c2e999
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.979914 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.995673 4970 generic.go:334] "Generic (PLEG): container finished" podID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" containerID="92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e" exitCode=0
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.995793 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df2bf7be-ff3b-4a81-a65b-bb4848d39fda","Type":"ContainerDied","Data":"92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e"}
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.995834 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"df2bf7be-ff3b-4a81-a65b-bb4848d39fda","Type":"ContainerDied","Data":"206b4aa811482087b99e7304587e5d363831ae9eedeac5e9a49246cdebdc96fd"}
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.995870 4970 scope.go:117] "RemoveContainer" containerID="92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e"
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.996072 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
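[Annotation] The kubelet_volumes.go "Cleaned up orphaned pod volumes dir" entries above are housekeeping: once a deleted pod's volumes are torn down, its directory under the pods root no longer matches any active pod UID and is removed. A dry-run sketch of that scan (the path is the default kubelet root and the active set is invented; this only prints candidates, it removes nothing):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	podsRoot := "/var/lib/kubelet/pods" // assumption: default kubelet root
    	active := map[string]bool{"b41f183c-fb01-431c-aa98-75bb7b7bb669": true}

    	entries, err := os.ReadDir(podsRoot)
    	if err != nil {
    		fmt.Println("skipping scan:", err) // e.g., not running on a kubelet host
    		return
    	}
    	for _, e := range entries {
    		// Any per-pod directory whose UID is not in the active set is an
    		// orphan candidate, like the 4413d76e... and ae142b63... dirs above.
    		if e.IsDir() && !active[e.Name()] {
    			fmt.Println("orphaned pod volumes dir:",
    				filepath.Join(podsRoot, e.Name(), "volumes"))
    		}
    	}
    }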
Sep 30 10:05:35 crc kubenswrapper[4970]: I0930 10:05:35.998508 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f","Type":"ContainerStarted","Data":"4883a208a4790329db4091249034ef7757347844880cb43c43daf579c7c2e999"}
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.025790 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.026189 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3d9ea2ae-a60c-42a1-acf1-7249574b296a","Type":"ContainerDied","Data":"cebbd20031e6cfaa12f33d99d8914f04a6faca748890e79c0ab6b21c28068f24"}
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.036058 4970 scope.go:117] "RemoveContainer" containerID="d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.053977 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.069382 4970 scope.go:117] "RemoveContainer" containerID="92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e"
Sep 30 10:05:36 crc kubenswrapper[4970]: E0930 10:05:36.070644 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e\": container with ID starting with 92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e not found: ID does not exist" containerID="92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.070671 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e"} err="failed to get container status \"92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e\": rpc error: code = NotFound desc = could not find container \"92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e\": container with ID starting with 92d4ae8af6c4f804796ee48a49db84d816db6b2758818a5d3f38145ce3fd2f1e not found: ID does not exist"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.070692 4970 scope.go:117] "RemoveContainer" containerID="d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19"
Sep 30 10:05:36 crc kubenswrapper[4970]: E0930 10:05:36.072004 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19\": container with ID starting with d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19 not found: ID does not exist" containerID="d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.072035 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19"} err="failed to get container status \"d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19\": rpc error: code = NotFound desc = could not find container \"d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19\": container with ID starting with d862fef4f544f4fec3fcf2b4857fed7a7232e431f76598d21d1f2fddf83fff19 not found: ID does not exist"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.072080 4970 scope.go:117] "RemoveContainer" containerID="8a4497583dea87ec98ad2b1307d06fa9abb960994e7c90765bc34c9106cb14ea"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.072777 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-combined-ca-bundle\") pod \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") "
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.072836 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-config-data\") pod \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") "
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.072951 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdj6w\" (UniqueName: \"kubernetes.io/projected/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-kube-api-access-sdj6w\") pod \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") "
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.073027 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-logs\") pod \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") "
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.073825 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-logs" (OuterVolumeSpecName: "logs") pod "df2bf7be-ff3b-4a81-a65b-bb4848d39fda" (UID: "df2bf7be-ff3b-4a81-a65b-bb4848d39fda"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.080024 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.082597 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-kube-api-access-sdj6w" (OuterVolumeSpecName: "kube-api-access-sdj6w") pod "df2bf7be-ff3b-4a81-a65b-bb4848d39fda" (UID: "df2bf7be-ff3b-4a81-a65b-bb4848d39fda"). InnerVolumeSpecName "kube-api-access-sdj6w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.092143 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Sep 30 10:05:36 crc kubenswrapper[4970]: E0930 10:05:36.092615 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d9ea2ae-a60c-42a1-acf1-7249574b296a" containerName="nova-scheduler-scheduler"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.092632 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d9ea2ae-a60c-42a1-acf1-7249574b296a" containerName="nova-scheduler-scheduler"
Sep 30 10:05:36 crc kubenswrapper[4970]: E0930 10:05:36.092651 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" containerName="nova-api-api"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.092657 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" containerName="nova-api-api"
Sep 30 10:05:36 crc kubenswrapper[4970]: E0930 10:05:36.092678 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" containerName="nova-api-log"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.092684 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" containerName="nova-api-log"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.092869 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" containerName="nova-api-log"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.092881 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" containerName="nova-api-api"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.092909 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d9ea2ae-a60c-42a1-acf1-7249574b296a" containerName="nova-scheduler-scheduler"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.093599 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.098302 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.115264 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Sep 30 10:05:36 crc kubenswrapper[4970]: E0930 10:05:36.117533 4970 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-config-data podName:df2bf7be-ff3b-4a81-a65b-bb4848d39fda nodeName:}" failed. No retries permitted until 2025-09-30 10:05:36.617503122 +0000 UTC m=+1149.689354056 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-config-data") pod "df2bf7be-ff3b-4a81-a65b-bb4848d39fda" (UID: "df2bf7be-ff3b-4a81-a65b-bb4848d39fda") : error deleting /var/lib/kubelet/pods/df2bf7be-ff3b-4a81-a65b-bb4848d39fda/volume-subpaths: remove /var/lib/kubelet/pods/df2bf7be-ff3b-4a81-a65b-bb4848d39fda/volume-subpaths: no such file or directory
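[Annotation] The nestedpendingoperations entry above fails a volume cleanup and schedules a retry with durationBeforeRetry 500ms; repeated failures back off exponentially from that initial delay up to a cap. A sketch of that schedule (the 2-minute cap below is an assumption for illustration, not a quoted kubelet constant):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const (
    		initialDelay = 500 * time.Millisecond // matches durationBeforeRetry above
    		maxDelay     = 2 * time.Minute        // assumed cap for illustration
    	)
    	delay := initialDelay
    	for attempt := 1; attempt <= 10; attempt++ {
    		fmt.Printf("attempt %d: no retries permitted for %v\n", attempt, delay)
    		delay *= 2
    		if delay > maxDelay {
    			delay = maxDelay
    		}
    	}
    }

Here the retry is harmless: the subPath cleanup failed only because /var/lib/kubelet/pods/.../volume-subpaths was already gone (ENOENT), so a later attempt finds nothing left to delete and the unmount completes, as the subsequent TearDown entries show.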
Error: error cleaning subPath mounts for volume "config-data" (UniqueName: "kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-config-data") pod "df2bf7be-ff3b-4a81-a65b-bb4848d39fda" (UID: "df2bf7be-ff3b-4a81-a65b-bb4848d39fda") : error deleting /var/lib/kubelet/pods/df2bf7be-ff3b-4a81-a65b-bb4848d39fda/volume-subpaths: remove /var/lib/kubelet/pods/df2bf7be-ff3b-4a81-a65b-bb4848d39fda/volume-subpaths: no such file or directory Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.124880 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df2bf7be-ff3b-4a81-a65b-bb4848d39fda" (UID: "df2bf7be-ff3b-4a81-a65b-bb4848d39fda"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.174704 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da69c255-8862-483f-befc-490c59e22b8d-config-data\") pod \"nova-scheduler-0\" (UID: \"da69c255-8862-483f-befc-490c59e22b8d\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.174770 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da69c255-8862-483f-befc-490c59e22b8d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"da69c255-8862-483f-befc-490c59e22b8d\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.174810 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbnnp\" (UniqueName: \"kubernetes.io/projected/da69c255-8862-483f-befc-490c59e22b8d-kube-api-access-qbnnp\") pod \"nova-scheduler-0\" (UID: \"da69c255-8862-483f-befc-490c59e22b8d\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.175060 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.175080 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdj6w\" (UniqueName: \"kubernetes.io/projected/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-kube-api-access-sdj6w\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.175090 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.276550 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da69c255-8862-483f-befc-490c59e22b8d-config-data\") pod \"nova-scheduler-0\" (UID: \"da69c255-8862-483f-befc-490c59e22b8d\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.276875 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da69c255-8862-483f-befc-490c59e22b8d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"da69c255-8862-483f-befc-490c59e22b8d\") " 
pod="openstack/nova-scheduler-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.277021 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbnnp\" (UniqueName: \"kubernetes.io/projected/da69c255-8862-483f-befc-490c59e22b8d-kube-api-access-qbnnp\") pod \"nova-scheduler-0\" (UID: \"da69c255-8862-483f-befc-490c59e22b8d\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.279509 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.279901 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="ceilometer-central-agent" containerID="cri-o://d4d2c75f858b63c1613dc019e69124b3257eff29bdcabe41dc96809ff0422a50" gracePeriod=30 Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.280087 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="proxy-httpd" containerID="cri-o://8cc699b8753313310e417e756b2090a09fe9d2dbb186511ac8427eb601af34f2" gracePeriod=30 Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.280140 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="sg-core" containerID="cri-o://f741b787d7088767b2694754341aea106b51e5c15900cc3597836247f93cd6bc" gracePeriod=30 Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.280184 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="ceilometer-notification-agent" containerID="cri-o://08e44e1342cca3e7b11350365c8ec8c853c6e6676145e0a847433f07fc3dcacc" gracePeriod=30 Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.280589 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da69c255-8862-483f-befc-490c59e22b8d-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"da69c255-8862-483f-befc-490c59e22b8d\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.281120 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da69c255-8862-483f-befc-490c59e22b8d-config-data\") pod \"nova-scheduler-0\" (UID: \"da69c255-8862-483f-befc-490c59e22b8d\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.314963 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbnnp\" (UniqueName: \"kubernetes.io/projected/da69c255-8862-483f-befc-490c59e22b8d-kube-api-access-qbnnp\") pod \"nova-scheduler-0\" (UID: \"da69c255-8862-483f-befc-490c59e22b8d\") " pod="openstack/nova-scheduler-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.422475 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.639613 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.639901 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.698049 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-config-data\") pod \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\" (UID: \"df2bf7be-ff3b-4a81-a65b-bb4848d39fda\") " Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.705566 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-config-data" (OuterVolumeSpecName: "config-data") pod "df2bf7be-ff3b-4a81-a65b-bb4848d39fda" (UID: "df2bf7be-ff3b-4a81-a65b-bb4848d39fda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.801356 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df2bf7be-ff3b-4a81-a65b-bb4848d39fda-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.937313 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.950594 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.960444 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.962412 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.965051 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Sep 30 10:05:36 crc kubenswrapper[4970]: I0930 10:05:36.973440 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.049664 4970 generic.go:334] "Generic (PLEG): container finished" podID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerID="8cc699b8753313310e417e756b2090a09fe9d2dbb186511ac8427eb601af34f2" exitCode=0 Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.049694 4970 generic.go:334] "Generic (PLEG): container finished" podID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerID="f741b787d7088767b2694754341aea106b51e5c15900cc3597836247f93cd6bc" exitCode=2 Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.049703 4970 generic.go:334] "Generic (PLEG): container finished" podID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerID="d4d2c75f858b63c1613dc019e69124b3257eff29bdcabe41dc96809ff0422a50" exitCode=0 Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.049742 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2e66a43-20be-4a00-bb74-843e6dd7af44","Type":"ContainerDied","Data":"8cc699b8753313310e417e756b2090a09fe9d2dbb186511ac8427eb601af34f2"} Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.049774 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2e66a43-20be-4a00-bb74-843e6dd7af44","Type":"ContainerDied","Data":"f741b787d7088767b2694754341aea106b51e5c15900cc3597836247f93cd6bc"} Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.049785 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2e66a43-20be-4a00-bb74-843e6dd7af44","Type":"ContainerDied","Data":"d4d2c75f858b63c1613dc019e69124b3257eff29bdcabe41dc96809ff0422a50"} Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.055301 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"fd6ccdd4-d83d-47fe-8283-b4625ad7d17f","Type":"ContainerStarted","Data":"c7a97e206a540fc3db7d2fca58c8fb84dc770e81a3ceb553517badd1c196cba3"} Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.056383 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.057101 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.083483 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.572321222 podStartE2EDuration="2.083444476s" podCreationTimestamp="2025-09-30 10:05:35 +0000 UTC" firstStartedPulling="2025-09-30 10:05:35.688221252 +0000 UTC m=+1148.760072186" lastFinishedPulling="2025-09-30 10:05:36.199344506 +0000 UTC m=+1149.271195440" observedRunningTime="2025-09-30 10:05:37.069606726 +0000 UTC m=+1150.141457660" watchObservedRunningTime="2025-09-30 10:05:37.083444476 +0000 UTC m=+1150.155295400" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.108508 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83856b8-cf2e-439a-b243-9834c90ada96-logs\") pod \"nova-api-0\" (UID: 
\"d83856b8-cf2e-439a-b243-9834c90ada96\") " pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.108597 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjnh6\" (UniqueName: \"kubernetes.io/projected/d83856b8-cf2e-439a-b243-9834c90ada96-kube-api-access-mjnh6\") pod \"nova-api-0\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.108622 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83856b8-cf2e-439a-b243-9834c90ada96-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.108680 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83856b8-cf2e-439a-b243-9834c90ada96-config-data\") pod \"nova-api-0\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.210697 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83856b8-cf2e-439a-b243-9834c90ada96-logs\") pod \"nova-api-0\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.210801 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjnh6\" (UniqueName: \"kubernetes.io/projected/d83856b8-cf2e-439a-b243-9834c90ada96-kube-api-access-mjnh6\") pod \"nova-api-0\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.210843 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83856b8-cf2e-439a-b243-9834c90ada96-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.210906 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83856b8-cf2e-439a-b243-9834c90ada96-config-data\") pod \"nova-api-0\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.212295 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83856b8-cf2e-439a-b243-9834c90ada96-logs\") pod \"nova-api-0\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.215934 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83856b8-cf2e-439a-b243-9834c90ada96-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.216763 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83856b8-cf2e-439a-b243-9834c90ada96-config-data\") pod \"nova-api-0\" (UID: 
\"d83856b8-cf2e-439a-b243-9834c90ada96\") " pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.227850 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjnh6\" (UniqueName: \"kubernetes.io/projected/d83856b8-cf2e-439a-b243-9834c90ada96-kube-api-access-mjnh6\") pod \"nova-api-0\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.297445 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.321533 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.681913 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d9ea2ae-a60c-42a1-acf1-7249574b296a" path="/var/lib/kubelet/pods/3d9ea2ae-a60c-42a1-acf1-7249574b296a/volumes" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.682964 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df2bf7be-ff3b-4a81-a65b-bb4848d39fda" path="/var/lib/kubelet/pods/df2bf7be-ff3b-4a81-a65b-bb4848d39fda/volumes" Sep 30 10:05:37 crc kubenswrapper[4970]: I0930 10:05:37.766029 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 10:05:38 crc kubenswrapper[4970]: I0930 10:05:38.064548 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da69c255-8862-483f-befc-490c59e22b8d","Type":"ContainerStarted","Data":"c0abc0449e9b19d1d393344e411e0d079dcd58bfb849bc523aac496d6f2e885d"} Sep 30 10:05:38 crc kubenswrapper[4970]: I0930 10:05:38.064807 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da69c255-8862-483f-befc-490c59e22b8d","Type":"ContainerStarted","Data":"d2722dec05ed33a95349240131e30ed7c52aed1792cdd2658eb380e935b68a4e"} Sep 30 10:05:38 crc kubenswrapper[4970]: I0930 10:05:38.066340 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d83856b8-cf2e-439a-b243-9834c90ada96","Type":"ContainerStarted","Data":"e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e"} Sep 30 10:05:38 crc kubenswrapper[4970]: I0930 10:05:38.066383 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d83856b8-cf2e-439a-b243-9834c90ada96","Type":"ContainerStarted","Data":"e5f74deabdbc0504f0a46e1da0d1e0dcf8054f8995b2751bf80958de58072a7f"} Sep 30 10:05:38 crc kubenswrapper[4970]: I0930 10:05:38.093550 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.093526809 podStartE2EDuration="2.093526809s" podCreationTimestamp="2025-09-30 10:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:05:38.083545156 +0000 UTC m=+1151.155396100" watchObservedRunningTime="2025-09-30 10:05:38.093526809 +0000 UTC m=+1151.165377753" Sep 30 10:05:39 crc kubenswrapper[4970]: I0930 10:05:39.079771 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d83856b8-cf2e-439a-b243-9834c90ada96","Type":"ContainerStarted","Data":"f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997"} Sep 30 10:05:39 crc kubenswrapper[4970]: I0930 10:05:39.109986 4970 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.109966297 podStartE2EDuration="3.109966297s" podCreationTimestamp="2025-09-30 10:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:05:39.10426581 +0000 UTC m=+1152.176116744" watchObservedRunningTime="2025-09-30 10:05:39.109966297 +0000 UTC m=+1152.181817231" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.097566 4970 generic.go:334] "Generic (PLEG): container finished" podID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerID="08e44e1342cca3e7b11350365c8ec8c853c6e6676145e0a847433f07fc3dcacc" exitCode=0 Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.097630 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2e66a43-20be-4a00-bb74-843e6dd7af44","Type":"ContainerDied","Data":"08e44e1342cca3e7b11350365c8ec8c853c6e6676145e0a847433f07fc3dcacc"} Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.299430 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.376947 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-config-data\") pod \"a2e66a43-20be-4a00-bb74-843e6dd7af44\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.377052 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-sg-core-conf-yaml\") pod \"a2e66a43-20be-4a00-bb74-843e6dd7af44\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.377107 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-scripts\") pod \"a2e66a43-20be-4a00-bb74-843e6dd7af44\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.377213 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2e66a43-20be-4a00-bb74-843e6dd7af44-run-httpd\") pod \"a2e66a43-20be-4a00-bb74-843e6dd7af44\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.377322 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-combined-ca-bundle\") pod \"a2e66a43-20be-4a00-bb74-843e6dd7af44\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.377363 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2e66a43-20be-4a00-bb74-843e6dd7af44-log-httpd\") pod \"a2e66a43-20be-4a00-bb74-843e6dd7af44\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.377449 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr29w\" (UniqueName: 
\"kubernetes.io/projected/a2e66a43-20be-4a00-bb74-843e6dd7af44-kube-api-access-tr29w\") pod \"a2e66a43-20be-4a00-bb74-843e6dd7af44\" (UID: \"a2e66a43-20be-4a00-bb74-843e6dd7af44\") " Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.387663 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2e66a43-20be-4a00-bb74-843e6dd7af44-kube-api-access-tr29w" (OuterVolumeSpecName: "kube-api-access-tr29w") pod "a2e66a43-20be-4a00-bb74-843e6dd7af44" (UID: "a2e66a43-20be-4a00-bb74-843e6dd7af44"). InnerVolumeSpecName "kube-api-access-tr29w". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.388125 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2e66a43-20be-4a00-bb74-843e6dd7af44-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a2e66a43-20be-4a00-bb74-843e6dd7af44" (UID: "a2e66a43-20be-4a00-bb74-843e6dd7af44"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.388400 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2e66a43-20be-4a00-bb74-843e6dd7af44-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a2e66a43-20be-4a00-bb74-843e6dd7af44" (UID: "a2e66a43-20be-4a00-bb74-843e6dd7af44"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.389173 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-scripts" (OuterVolumeSpecName: "scripts") pod "a2e66a43-20be-4a00-bb74-843e6dd7af44" (UID: "a2e66a43-20be-4a00-bb74-843e6dd7af44"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.430193 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a2e66a43-20be-4a00-bb74-843e6dd7af44" (UID: "a2e66a43-20be-4a00-bb74-843e6dd7af44"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.480214 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr29w\" (UniqueName: \"kubernetes.io/projected/a2e66a43-20be-4a00-bb74-843e6dd7af44-kube-api-access-tr29w\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.480250 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.480262 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.480272 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2e66a43-20be-4a00-bb74-843e6dd7af44-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.480282 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a2e66a43-20be-4a00-bb74-843e6dd7af44-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.487928 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2e66a43-20be-4a00-bb74-843e6dd7af44" (UID: "a2e66a43-20be-4a00-bb74-843e6dd7af44"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.504595 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-config-data" (OuterVolumeSpecName: "config-data") pod "a2e66a43-20be-4a00-bb74-843e6dd7af44" (UID: "a2e66a43-20be-4a00-bb74-843e6dd7af44"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.582433 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:40 crc kubenswrapper[4970]: I0930 10:05:40.582470 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2e66a43-20be-4a00-bb74-843e6dd7af44-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.108978 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a2e66a43-20be-4a00-bb74-843e6dd7af44","Type":"ContainerDied","Data":"2948ec68aff5f05d01a55ce6ca6f1c9da4d973d9ccb60f98bad3e30f7b24ed56"} Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.109090 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.109682 4970 scope.go:117] "RemoveContainer" containerID="8cc699b8753313310e417e756b2090a09fe9d2dbb186511ac8427eb601af34f2" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.141372 4970 scope.go:117] "RemoveContainer" containerID="f741b787d7088767b2694754341aea106b51e5c15900cc3597836247f93cd6bc" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.144221 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.155097 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.165049 4970 scope.go:117] "RemoveContainer" containerID="08e44e1342cca3e7b11350365c8ec8c853c6e6676145e0a847433f07fc3dcacc" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.171064 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:05:41 crc kubenswrapper[4970]: E0930 10:05:41.171459 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="proxy-httpd" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.171479 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="proxy-httpd" Sep 30 10:05:41 crc kubenswrapper[4970]: E0930 10:05:41.171502 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="ceilometer-central-agent" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.171510 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="ceilometer-central-agent" Sep 30 10:05:41 crc kubenswrapper[4970]: E0930 10:05:41.171529 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="ceilometer-notification-agent" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.171536 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="ceilometer-notification-agent" Sep 30 10:05:41 crc kubenswrapper[4970]: E0930 10:05:41.171553 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="sg-core" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.171559 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="sg-core" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.171731 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="sg-core" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.171746 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="ceilometer-notification-agent" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.171756 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="proxy-httpd" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.171768 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" containerName="ceilometer-central-agent" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.173342 4970 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.176177 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.176343 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.176692 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.198508 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.199195 4970 scope.go:117] "RemoveContainer" containerID="d4d2c75f858b63c1613dc019e69124b3257eff29bdcabe41dc96809ff0422a50" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.294612 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.294660 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mb4f\" (UniqueName: \"kubernetes.io/projected/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-kube-api-access-6mb4f\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.294706 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.294859 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-scripts\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.294945 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.295196 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-run-httpd\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.295329 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-config-data\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 
10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.295366 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-log-httpd\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.397239 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-config-data\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.397574 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-log-httpd\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.397605 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.397624 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mb4f\" (UniqueName: \"kubernetes.io/projected/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-kube-api-access-6mb4f\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.397665 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.397703 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-scripts\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.397732 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.397788 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-run-httpd\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.398206 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-run-httpd\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 
crc kubenswrapper[4970]: I0930 10:05:41.398597 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-log-httpd\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.401752 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.402314 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.402509 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.414359 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-scripts\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.415817 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mb4f\" (UniqueName: \"kubernetes.io/projected/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-kube-api-access-6mb4f\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.416557 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-config-data\") pod \"ceilometer-0\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.423718 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.499886 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.638372 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.638843 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.683282 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2e66a43-20be-4a00-bb74-843e6dd7af44" path="/var/lib/kubelet/pods/a2e66a43-20be-4a00-bb74-843e6dd7af44/volumes" Sep 30 10:05:41 crc kubenswrapper[4970]: I0930 10:05:41.951511 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:05:42 crc kubenswrapper[4970]: I0930 10:05:42.122179 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d","Type":"ContainerStarted","Data":"174e8447d0aa87d34e7a004547b21f6871866ae9a36f9f74843146cdad215c72"} Sep 30 10:05:42 crc kubenswrapper[4970]: I0930 10:05:42.653276 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 10:05:42 crc kubenswrapper[4970]: I0930 10:05:42.654387 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 10:05:43 crc kubenswrapper[4970]: I0930 10:05:43.133585 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d","Type":"ContainerStarted","Data":"d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb"} Sep 30 10:05:44 crc kubenswrapper[4970]: I0930 10:05:44.143384 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d","Type":"ContainerStarted","Data":"88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74"} Sep 30 10:05:45 crc kubenswrapper[4970]: I0930 10:05:45.152566 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d","Type":"ContainerStarted","Data":"c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c"} Sep 30 10:05:45 crc kubenswrapper[4970]: I0930 10:05:45.428449 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Sep 30 10:05:46 crc kubenswrapper[4970]: I0930 10:05:46.164462 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d","Type":"ContainerStarted","Data":"8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b"} Sep 30 10:05:46 crc kubenswrapper[4970]: I0930 10:05:46.164794 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Sep 30 10:05:46 crc kubenswrapper[4970]: I0930 10:05:46.199552 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=1.783644869 podStartE2EDuration="5.199469361s" podCreationTimestamp="2025-09-30 10:05:41 +0000 UTC" firstStartedPulling="2025-09-30 10:05:41.957878868 +0000 UTC m=+1155.029729812" lastFinishedPulling="2025-09-30 10:05:45.37370335 +0000 UTC m=+1158.445554304" observedRunningTime="2025-09-30 10:05:46.184575832 +0000 UTC m=+1159.256426766" watchObservedRunningTime="2025-09-30 10:05:46.199469361 +0000 UTC m=+1159.271320285" Sep 30 10:05:46 crc kubenswrapper[4970]: I0930 10:05:46.424274 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Sep 30 10:05:46 crc kubenswrapper[4970]: I0930 10:05:46.471393 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Sep 30 10:05:47 crc kubenswrapper[4970]: I0930 10:05:47.204253 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Sep 30 10:05:47 crc kubenswrapper[4970]: I0930 10:05:47.297746 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 10:05:47 crc kubenswrapper[4970]: I0930 10:05:47.297796 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 10:05:48 crc kubenswrapper[4970]: I0930 10:05:48.381214 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d83856b8-cf2e-439a-b243-9834c90ada96" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 10:05:48 crc kubenswrapper[4970]: I0930 10:05:48.381255 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d83856b8-cf2e-439a-b243-9834c90ada96" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.201:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Sep 30 10:05:51 crc kubenswrapper[4970]: I0930 10:05:51.645417 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Sep 30 10:05:51 crc kubenswrapper[4970]: I0930 10:05:51.646505 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Sep 30 10:05:51 crc kubenswrapper[4970]: I0930 10:05:51.652295 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Sep 30 10:05:52 crc kubenswrapper[4970]: I0930 10:05:52.226042 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.224290 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.238464 4970 generic.go:334] "Generic (PLEG): container finished" podID="f4cbd3e2-b978-4c95-b485-abfd0f9ecc51" containerID="b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619" exitCode=137 Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.239244 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.240108 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51","Type":"ContainerDied","Data":"b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619"} Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.240202 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51","Type":"ContainerDied","Data":"5b2dde908877358793ee3abef161e2489cb8b47f07926dc9892ae0402493a203"} Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.240232 4970 scope.go:117] "RemoveContainer" containerID="b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.269461 4970 scope.go:117] "RemoveContainer" containerID="b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619" Sep 30 10:05:54 crc kubenswrapper[4970]: E0930 10:05:54.270491 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619\": container with ID starting with b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619 not found: ID does not exist" containerID="b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.270558 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619"} err="failed to get container status \"b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619\": rpc error: code = NotFound desc = could not find container \"b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619\": container with ID starting with b94e9e79b2ee92f89ea388e4eb13b77cc1f7b1c031c914f5a2365d00519ee619 not found: ID does not exist" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.311750 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxhcg\" (UniqueName: \"kubernetes.io/projected/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-kube-api-access-vxhcg\") pod \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\" (UID: \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\") " Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.311809 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-config-data\") pod \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\" (UID: \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\") " Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.311977 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-combined-ca-bundle\") pod \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\" (UID: \"f4cbd3e2-b978-4c95-b485-abfd0f9ecc51\") " Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.325371 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-kube-api-access-vxhcg" (OuterVolumeSpecName: "kube-api-access-vxhcg") pod "f4cbd3e2-b978-4c95-b485-abfd0f9ecc51" (UID: "f4cbd3e2-b978-4c95-b485-abfd0f9ecc51"). 
InnerVolumeSpecName "kube-api-access-vxhcg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.341520 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-config-data" (OuterVolumeSpecName: "config-data") pod "f4cbd3e2-b978-4c95-b485-abfd0f9ecc51" (UID: "f4cbd3e2-b978-4c95-b485-abfd0f9ecc51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.346177 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4cbd3e2-b978-4c95-b485-abfd0f9ecc51" (UID: "f4cbd3e2-b978-4c95-b485-abfd0f9ecc51"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.413866 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxhcg\" (UniqueName: \"kubernetes.io/projected/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-kube-api-access-vxhcg\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.413908 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.413923 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.592183 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.602712 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.613350 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 10:05:54 crc kubenswrapper[4970]: E0930 10:05:54.613783 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4cbd3e2-b978-4c95-b485-abfd0f9ecc51" containerName="nova-cell1-novncproxy-novncproxy" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.613806 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4cbd3e2-b978-4c95-b485-abfd0f9ecc51" containerName="nova-cell1-novncproxy-novncproxy" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.614096 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4cbd3e2-b978-4c95-b485-abfd0f9ecc51" containerName="nova-cell1-novncproxy-novncproxy" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.614870 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.624969 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.625195 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.625216 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.625195 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.718341 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e81a0f38-b543-4aa5-aef9-fc02f91800e5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.718384 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e81a0f38-b543-4aa5-aef9-fc02f91800e5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.718443 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59d6q\" (UniqueName: \"kubernetes.io/projected/e81a0f38-b543-4aa5-aef9-fc02f91800e5-kube-api-access-59d6q\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.718469 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e81a0f38-b543-4aa5-aef9-fc02f91800e5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.718488 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e81a0f38-b543-4aa5-aef9-fc02f91800e5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.820688 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e81a0f38-b543-4aa5-aef9-fc02f91800e5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.820741 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e81a0f38-b543-4aa5-aef9-fc02f91800e5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 
10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.820874 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e81a0f38-b543-4aa5-aef9-fc02f91800e5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.820893 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e81a0f38-b543-4aa5-aef9-fc02f91800e5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.820947 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59d6q\" (UniqueName: \"kubernetes.io/projected/e81a0f38-b543-4aa5-aef9-fc02f91800e5-kube-api-access-59d6q\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.828055 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e81a0f38-b543-4aa5-aef9-fc02f91800e5-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.828788 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/e81a0f38-b543-4aa5-aef9-fc02f91800e5-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.828465 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e81a0f38-b543-4aa5-aef9-fc02f91800e5-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.828444 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/e81a0f38-b543-4aa5-aef9-fc02f91800e5-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.844690 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59d6q\" (UniqueName: \"kubernetes.io/projected/e81a0f38-b543-4aa5-aef9-fc02f91800e5-kube-api-access-59d6q\") pod \"nova-cell1-novncproxy-0\" (UID: \"e81a0f38-b543-4aa5-aef9-fc02f91800e5\") " pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:54 crc kubenswrapper[4970]: I0930 10:05:54.930779 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:05:55 crc kubenswrapper[4970]: I0930 10:05:55.388802 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Sep 30 10:05:55 crc kubenswrapper[4970]: I0930 10:05:55.680002 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4cbd3e2-b978-4c95-b485-abfd0f9ecc51" path="/var/lib/kubelet/pods/f4cbd3e2-b978-4c95-b485-abfd0f9ecc51/volumes" Sep 30 10:05:56 crc kubenswrapper[4970]: I0930 10:05:56.274889 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e81a0f38-b543-4aa5-aef9-fc02f91800e5","Type":"ContainerStarted","Data":"af88ccf2a487d1e98eb3b1f2ec666337cb8d57e4e621a6d2fa10d822c2c664b3"} Sep 30 10:05:56 crc kubenswrapper[4970]: I0930 10:05:56.274933 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e81a0f38-b543-4aa5-aef9-fc02f91800e5","Type":"ContainerStarted","Data":"5a483ab850caac47cb90009ea5cebc0cb98ed0b0c0eb6d74954cb7695d959324"} Sep 30 10:05:56 crc kubenswrapper[4970]: I0930 10:05:56.313046 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.313014025 podStartE2EDuration="2.313014025s" podCreationTimestamp="2025-09-30 10:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:05:56.301598392 +0000 UTC m=+1169.373449336" watchObservedRunningTime="2025-09-30 10:05:56.313014025 +0000 UTC m=+1169.384864959" Sep 30 10:05:57 crc kubenswrapper[4970]: I0930 10:05:57.330354 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 30 10:05:57 crc kubenswrapper[4970]: I0930 10:05:57.331060 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 30 10:05:57 crc kubenswrapper[4970]: I0930 10:05:57.335397 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 30 10:05:57 crc kubenswrapper[4970]: I0930 10:05:57.337770 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.325066 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.333374 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.519052 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-ksk7j"] Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.520546 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.534765 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-ksk7j"] Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.592319 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.592372 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw58j\" (UniqueName: \"kubernetes.io/projected/729849c2-5efc-43c1-841e-a971a5739723-kube-api-access-pw58j\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.592395 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-config\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.592437 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.592476 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.592498 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.694494 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.694567 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw58j\" (UniqueName: \"kubernetes.io/projected/729849c2-5efc-43c1-841e-a971a5739723-kube-api-access-pw58j\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.694614 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-config\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.694752 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.694805 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.694839 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.695626 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-config\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.695643 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.695643 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.695889 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.696218 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.718595 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw58j\" (UniqueName: 
\"kubernetes.io/projected/729849c2-5efc-43c1-841e-a971a5739723-kube-api-access-pw58j\") pod \"dnsmasq-dns-5c7b6c5df9-ksk7j\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:58 crc kubenswrapper[4970]: I0930 10:05:58.849756 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:05:59 crc kubenswrapper[4970]: I0930 10:05:59.166738 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-ksk7j"] Sep 30 10:05:59 crc kubenswrapper[4970]: I0930 10:05:59.338273 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" event={"ID":"729849c2-5efc-43c1-841e-a971a5739723","Type":"ContainerStarted","Data":"e4342cc9ddeda49df0bfb374630254b8416f90968624bae13343a1465375543e"} Sep 30 10:05:59 crc kubenswrapper[4970]: I0930 10:05:59.932352 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:06:00 crc kubenswrapper[4970]: I0930 10:06:00.348909 4970 generic.go:334] "Generic (PLEG): container finished" podID="729849c2-5efc-43c1-841e-a971a5739723" containerID="039b546644d8b91b940fc49386206766abfd36717f8c1a45e8173dd303793bb9" exitCode=0 Sep 30 10:06:00 crc kubenswrapper[4970]: I0930 10:06:00.349005 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" event={"ID":"729849c2-5efc-43c1-841e-a971a5739723","Type":"ContainerDied","Data":"039b546644d8b91b940fc49386206766abfd36717f8c1a45e8173dd303793bb9"} Sep 30 10:06:00 crc kubenswrapper[4970]: I0930 10:06:00.714231 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Sep 30 10:06:00 crc kubenswrapper[4970]: I0930 10:06:00.714797 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="ceilometer-central-agent" containerID="cri-o://d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb" gracePeriod=30 Sep 30 10:06:00 crc kubenswrapper[4970]: I0930 10:06:00.714886 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="sg-core" containerID="cri-o://c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c" gracePeriod=30 Sep 30 10:06:00 crc kubenswrapper[4970]: I0930 10:06:00.714910 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="proxy-httpd" containerID="cri-o://8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b" gracePeriod=30 Sep 30 10:06:00 crc kubenswrapper[4970]: I0930 10:06:00.714913 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="ceilometer-notification-agent" containerID="cri-o://88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74" gracePeriod=30 Sep 30 10:06:00 crc kubenswrapper[4970]: I0930 10:06:00.719430 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.202:3000/\": read tcp 10.217.0.2:50986->10.217.0.202:3000: read: connection reset by peer" Sep 30 10:06:01 
crc kubenswrapper[4970]: I0930 10:06:01.149807 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 10:06:01 crc kubenswrapper[4970]: I0930 10:06:01.367161 4970 generic.go:334] "Generic (PLEG): container finished" podID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerID="8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b" exitCode=0 Sep 30 10:06:01 crc kubenswrapper[4970]: I0930 10:06:01.368267 4970 generic.go:334] "Generic (PLEG): container finished" podID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerID="c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c" exitCode=2 Sep 30 10:06:01 crc kubenswrapper[4970]: I0930 10:06:01.368456 4970 generic.go:334] "Generic (PLEG): container finished" podID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerID="d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb" exitCode=0 Sep 30 10:06:01 crc kubenswrapper[4970]: I0930 10:06:01.367226 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d","Type":"ContainerDied","Data":"8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b"} Sep 30 10:06:01 crc kubenswrapper[4970]: I0930 10:06:01.368842 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d","Type":"ContainerDied","Data":"c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c"} Sep 30 10:06:01 crc kubenswrapper[4970]: I0930 10:06:01.369026 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d","Type":"ContainerDied","Data":"d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb"} Sep 30 10:06:01 crc kubenswrapper[4970]: I0930 10:06:01.372727 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" event={"ID":"729849c2-5efc-43c1-841e-a971a5739723","Type":"ContainerStarted","Data":"c79b00fa9fa4db26f56d8b94fecccb75e09385d347669435f249c30097b62581"} Sep 30 10:06:01 crc kubenswrapper[4970]: I0930 10:06:01.372834 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d83856b8-cf2e-439a-b243-9834c90ada96" containerName="nova-api-log" containerID="cri-o://e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e" gracePeriod=30 Sep 30 10:06:01 crc kubenswrapper[4970]: I0930 10:06:01.373013 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d83856b8-cf2e-439a-b243-9834c90ada96" containerName="nova-api-api" containerID="cri-o://f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997" gracePeriod=30 Sep 30 10:06:01 crc kubenswrapper[4970]: I0930 10:06:01.373883 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:06:01 crc kubenswrapper[4970]: I0930 10:06:01.399402 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" podStartSLOduration=3.399373559 podStartE2EDuration="3.399373559s" podCreationTimestamp="2025-09-30 10:05:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:06:01.394902666 +0000 UTC m=+1174.466753600" watchObservedRunningTime="2025-09-30 10:06:01.399373559 +0000 UTC m=+1174.471224513" Sep 30 10:06:02 crc 
kubenswrapper[4970]: I0930 10:06:02.388929 4970 generic.go:334] "Generic (PLEG): container finished" podID="d83856b8-cf2e-439a-b243-9834c90ada96" containerID="e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e" exitCode=143 Sep 30 10:06:02 crc kubenswrapper[4970]: I0930 10:06:02.389054 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d83856b8-cf2e-439a-b243-9834c90ada96","Type":"ContainerDied","Data":"e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e"} Sep 30 10:06:04 crc kubenswrapper[4970]: I0930 10:06:04.931547 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:06:04 crc kubenswrapper[4970]: I0930 10:06:04.951512 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.138443 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.244820 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83856b8-cf2e-439a-b243-9834c90ada96-config-data\") pod \"d83856b8-cf2e-439a-b243-9834c90ada96\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.245138 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83856b8-cf2e-439a-b243-9834c90ada96-logs\") pod \"d83856b8-cf2e-439a-b243-9834c90ada96\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.245231 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjnh6\" (UniqueName: \"kubernetes.io/projected/d83856b8-cf2e-439a-b243-9834c90ada96-kube-api-access-mjnh6\") pod \"d83856b8-cf2e-439a-b243-9834c90ada96\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.245319 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83856b8-cf2e-439a-b243-9834c90ada96-combined-ca-bundle\") pod \"d83856b8-cf2e-439a-b243-9834c90ada96\" (UID: \"d83856b8-cf2e-439a-b243-9834c90ada96\") " Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.245658 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83856b8-cf2e-439a-b243-9834c90ada96-logs" (OuterVolumeSpecName: "logs") pod "d83856b8-cf2e-439a-b243-9834c90ada96" (UID: "d83856b8-cf2e-439a-b243-9834c90ada96"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.245981 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d83856b8-cf2e-439a-b243-9834c90ada96-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.254494 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83856b8-cf2e-439a-b243-9834c90ada96-kube-api-access-mjnh6" (OuterVolumeSpecName: "kube-api-access-mjnh6") pod "d83856b8-cf2e-439a-b243-9834c90ada96" (UID: "d83856b8-cf2e-439a-b243-9834c90ada96"). InnerVolumeSpecName "kube-api-access-mjnh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.278261 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d83856b8-cf2e-439a-b243-9834c90ada96-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d83856b8-cf2e-439a-b243-9834c90ada96" (UID: "d83856b8-cf2e-439a-b243-9834c90ada96"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.283353 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d83856b8-cf2e-439a-b243-9834c90ada96-config-data" (OuterVolumeSpecName: "config-data") pod "d83856b8-cf2e-439a-b243-9834c90ada96" (UID: "d83856b8-cf2e-439a-b243-9834c90ada96"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.326635 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.347245 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjnh6\" (UniqueName: \"kubernetes.io/projected/d83856b8-cf2e-439a-b243-9834c90ada96-kube-api-access-mjnh6\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.347277 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d83856b8-cf2e-439a-b243-9834c90ada96-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.347286 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d83856b8-cf2e-439a-b243-9834c90ada96-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.423947 4970 generic.go:334] "Generic (PLEG): container finished" podID="d83856b8-cf2e-439a-b243-9834c90ada96" containerID="f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997" exitCode=0 Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.424030 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d83856b8-cf2e-439a-b243-9834c90ada96","Type":"ContainerDied","Data":"f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997"} Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.424058 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d83856b8-cf2e-439a-b243-9834c90ada96","Type":"ContainerDied","Data":"e5f74deabdbc0504f0a46e1da0d1e0dcf8054f8995b2751bf80958de58072a7f"} Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.424075 4970 scope.go:117] "RemoveContainer" containerID="f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.424201 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.437332 4970 generic.go:334] "Generic (PLEG): container finished" podID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerID="88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74" exitCode=0 Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.438314 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.439383 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d","Type":"ContainerDied","Data":"88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74"} Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.439428 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d","Type":"ContainerDied","Data":"174e8447d0aa87d34e7a004547b21f6871866ae9a36f9f74843146cdad215c72"} Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.449062 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-log-httpd\") pod \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.449159 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-scripts\") pod \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.449202 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-sg-core-conf-yaml\") pod \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.449256 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mb4f\" (UniqueName: \"kubernetes.io/projected/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-kube-api-access-6mb4f\") pod \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.449292 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-ceilometer-tls-certs\") pod \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.449348 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-config-data\") pod \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.449404 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-run-httpd\") pod \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.449452 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-combined-ca-bundle\") pod \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\" (UID: \"0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d\") " Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.449784 4970 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" (UID: "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.450688 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" (UID: "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.457214 4970 scope.go:117] "RemoveContainer" containerID="e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.464945 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-scripts" (OuterVolumeSpecName: "scripts") pod "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" (UID: "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.470115 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.474047 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.478285 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.481747 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-kube-api-access-6mb4f" (OuterVolumeSpecName: "kube-api-access-6mb4f") pod "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" (UID: "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d"). InnerVolumeSpecName "kube-api-access-6mb4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.495181 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" (UID: "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.509935 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Sep 30 10:06:05 crc kubenswrapper[4970]: E0930 10:06:05.510692 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="proxy-httpd" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.510808 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="proxy-httpd" Sep 30 10:06:05 crc kubenswrapper[4970]: E0930 10:06:05.511013 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83856b8-cf2e-439a-b243-9834c90ada96" containerName="nova-api-log" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.511105 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83856b8-cf2e-439a-b243-9834c90ada96" containerName="nova-api-log" Sep 30 10:06:05 crc kubenswrapper[4970]: E0930 10:06:05.511188 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83856b8-cf2e-439a-b243-9834c90ada96" containerName="nova-api-api" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.511274 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83856b8-cf2e-439a-b243-9834c90ada96" containerName="nova-api-api" Sep 30 10:06:05 crc kubenswrapper[4970]: E0930 10:06:05.511358 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="ceilometer-notification-agent" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.511434 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="ceilometer-notification-agent" Sep 30 10:06:05 crc kubenswrapper[4970]: E0930 10:06:05.511541 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="sg-core" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.511623 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="sg-core" Sep 30 10:06:05 crc kubenswrapper[4970]: E0930 10:06:05.511702 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="ceilometer-central-agent" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.511772 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="ceilometer-central-agent" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.512114 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="ceilometer-notification-agent" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.512181 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="ceilometer-central-agent" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.512248 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="sg-core" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.512307 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83856b8-cf2e-439a-b243-9834c90ada96" containerName="nova-api-log" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.512370 4970 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" containerName="proxy-httpd" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.512438 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83856b8-cf2e-439a-b243-9834c90ada96" containerName="nova-api-api" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.513554 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.514191 4970 scope.go:117] "RemoveContainer" containerID="f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997" Sep 30 10:06:05 crc kubenswrapper[4970]: E0930 10:06:05.515815 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997\": container with ID starting with f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997 not found: ID does not exist" containerID="f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.515869 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997"} err="failed to get container status \"f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997\": rpc error: code = NotFound desc = could not find container \"f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997\": container with ID starting with f7cf90da35f719a915704a209454ddf64bb920011887b7cf2527105eb5716997 not found: ID does not exist" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.515901 4970 scope.go:117] "RemoveContainer" containerID="e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.516144 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.516371 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Sep 30 10:06:05 crc kubenswrapper[4970]: E0930 10:06:05.516500 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e\": container with ID starting with e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e not found: ID does not exist" containerID="e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.516632 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e"} err="failed to get container status \"e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e\": rpc error: code = NotFound desc = could not find container \"e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e\": container with ID starting with e8be1abad3f3a2a9175801f5c8ea2ba3df7145c43ad5e3e8f884e5db37daf15e not found: ID does not exist" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.516719 4970 scope.go:117] "RemoveContainer" containerID="8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.522263 4970 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-api-config-data" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.528357 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.534134 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" (UID: "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.552220 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94sw9\" (UniqueName: \"kubernetes.io/projected/eb942b20-48f2-45c2-917f-d20a77fd2ada-kube-api-access-94sw9\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.552300 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb942b20-48f2-45c2-917f-d20a77fd2ada-logs\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.552360 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.552376 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.552431 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-public-tls-certs\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.552480 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-config-data\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.552575 4970 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.552585 4970 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-run-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.552594 4970 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-log-httpd\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.552601 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-scripts\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.552609 4970 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.552617 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mb4f\" (UniqueName: \"kubernetes.io/projected/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-kube-api-access-6mb4f\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.558704 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" (UID: "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.559745 4970 scope.go:117] "RemoveContainer" containerID="c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.582807 4970 scope.go:117] "RemoveContainer" containerID="88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.603907 4970 scope.go:117] "RemoveContainer" containerID="d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.623725 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-config-data" (OuterVolumeSpecName: "config-data") pod "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" (UID: "0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.654710 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-config-data\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.654879 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94sw9\" (UniqueName: \"kubernetes.io/projected/eb942b20-48f2-45c2-917f-d20a77fd2ada-kube-api-access-94sw9\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.654950 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb942b20-48f2-45c2-917f-d20a77fd2ada-logs\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.655101 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.655135 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.655217 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-public-tls-certs\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.655299 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.655319 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.655924 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb942b20-48f2-45c2-917f-d20a77fd2ada-logs\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.659210 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-config-data\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.659799 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-internal-tls-certs\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.666567 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.668441 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-public-tls-certs\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.681235 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83856b8-cf2e-439a-b243-9834c90ada96" path="/var/lib/kubelet/pods/d83856b8-cf2e-439a-b243-9834c90ada96/volumes" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.682476 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-sv252"] Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.684031 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sv252" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.685718 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.686127 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.695216 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sv252"] Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.728136 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94sw9\" (UniqueName: \"kubernetes.io/projected/eb942b20-48f2-45c2-917f-d20a77fd2ada-kube-api-access-94sw9\") pod \"nova-api-0\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") " pod="openstack/nova-api-0" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.756457 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-config-data\") pod \"nova-cell1-cell-mapping-sv252\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") " pod="openstack/nova-cell1-cell-mapping-sv252" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.756523 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sv252\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") " pod="openstack/nova-cell1-cell-mapping-sv252" Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.756617 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-scripts\") pod \"nova-cell1-cell-mapping-sv252\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") " pod="openstack/nova-cell1-cell-mapping-sv252" Sep 30 
10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.757244 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jtm\" (UniqueName: \"kubernetes.io/projected/05036766-e1fb-4fee-b4bc-5ff318fa9793-kube-api-access-s4jtm\") pod \"nova-cell1-cell-mapping-sv252\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") " pod="openstack/nova-cell1-cell-mapping-sv252"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.776069 4970 scope.go:117] "RemoveContainer" containerID="8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b"
Sep 30 10:06:05 crc kubenswrapper[4970]: E0930 10:06:05.776758 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b\": container with ID starting with 8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b not found: ID does not exist" containerID="8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.776817 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b"} err="failed to get container status \"8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b\": rpc error: code = NotFound desc = could not find container \"8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b\": container with ID starting with 8b4f62cfa1dda219fbdf2f98d9f51e3c799233511ab181da76ca17886e87253b not found: ID does not exist"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.776859 4970 scope.go:117] "RemoveContainer" containerID="c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c"
Sep 30 10:06:05 crc kubenswrapper[4970]: E0930 10:06:05.777334 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c\": container with ID starting with c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c not found: ID does not exist" containerID="c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.777380 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c"} err="failed to get container status \"c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c\": rpc error: code = NotFound desc = could not find container \"c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c\": container with ID starting with c1049488add4f033be8353b45c8000bf7678b7435f5ed2bca51590a54336186c not found: ID does not exist"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.777416 4970 scope.go:117] "RemoveContainer" containerID="88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74"
Sep 30 10:06:05 crc kubenswrapper[4970]: E0930 10:06:05.777712 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74\": container with ID starting with 88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74 not found: ID does not exist" containerID="88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.777740 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74"} err="failed to get container status \"88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74\": rpc error: code = NotFound desc = could not find container \"88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74\": container with ID starting with 88b4f452cb753ebfe5c4d8139d5800b522b73d898306db776cb5d8aa98667d74 not found: ID does not exist"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.777759 4970 scope.go:117] "RemoveContainer" containerID="d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb"
Sep 30 10:06:05 crc kubenswrapper[4970]: E0930 10:06:05.778062 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb\": container with ID starting with d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb not found: ID does not exist" containerID="d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.778083 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb"} err="failed to get container status \"d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb\": rpc error: code = NotFound desc = could not find container \"d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb\": container with ID starting with d3370ca5f05ad922c2c104c88421c90766feae581fdbb2ce7d23051040dfe6bb not found: ID does not exist"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.804566 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.814490 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.824674 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.826899 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.829077 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.829183 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.830034 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.844277 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.856057 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.864560 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.864752 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4jtm\" (UniqueName: \"kubernetes.io/projected/05036766-e1fb-4fee-b4bc-5ff318fa9793-kube-api-access-s4jtm\") pod \"nova-cell1-cell-mapping-sv252\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") " pod="openstack/nova-cell1-cell-mapping-sv252"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.864882 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-scripts\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.865190 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-config-data\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.865248 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-run-httpd\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.865297 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.865517 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-config-data\") pod \"nova-cell1-cell-mapping-sv252\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") " pod="openstack/nova-cell1-cell-mapping-sv252"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.865573 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-log-httpd\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.865596 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.865667 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sv252\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") " pod="openstack/nova-cell1-cell-mapping-sv252"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.865761 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-scripts\") pod \"nova-cell1-cell-mapping-sv252\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") " pod="openstack/nova-cell1-cell-mapping-sv252"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.865784 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd6ch\" (UniqueName: \"kubernetes.io/projected/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-kube-api-access-vd6ch\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.872605 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-config-data\") pod \"nova-cell1-cell-mapping-sv252\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") " pod="openstack/nova-cell1-cell-mapping-sv252"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.873797 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-scripts\") pod \"nova-cell1-cell-mapping-sv252\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") " pod="openstack/nova-cell1-cell-mapping-sv252"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.882470 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-sv252\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") " pod="openstack/nova-cell1-cell-mapping-sv252"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.885654 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4jtm\" (UniqueName: \"kubernetes.io/projected/05036766-e1fb-4fee-b4bc-5ff318fa9793-kube-api-access-s4jtm\") pod \"nova-cell1-cell-mapping-sv252\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") " pod="openstack/nova-cell1-cell-mapping-sv252"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.971401 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-config-data\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.971738 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-run-httpd\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.971766 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.971832 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-log-httpd\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.971850 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.971879 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd6ch\" (UniqueName: \"kubernetes.io/projected/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-kube-api-access-vd6ch\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.971915 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.971951 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-scripts\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.973916 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-log-httpd\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.973956 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-run-httpd\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.977433 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-config-data\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.980093 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.980261 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.980342 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.984141 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-scripts\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:05 crc kubenswrapper[4970]: I0930 10:06:05.991292 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd6ch\" (UniqueName: \"kubernetes.io/projected/3dc8250d-7e71-408b-8aa3-947ebe6ef0a1-kube-api-access-vd6ch\") pod \"ceilometer-0\" (UID: \"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1\") " pod="openstack/ceilometer-0"
Sep 30 10:06:06 crc kubenswrapper[4970]: I0930 10:06:06.080524 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sv252"
Sep 30 10:06:06 crc kubenswrapper[4970]: I0930 10:06:06.160490 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Sep 30 10:06:06 crc kubenswrapper[4970]: I0930 10:06:06.332106 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Sep 30 10:06:06 crc kubenswrapper[4970]: I0930 10:06:06.473479 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb942b20-48f2-45c2-917f-d20a77fd2ada","Type":"ContainerStarted","Data":"5ffdbd929dbec9440867254b4f235c955b0dfd539f3b4feebcc46734e911728f"}
Sep 30 10:06:06 crc kubenswrapper[4970]: I0930 10:06:06.534534 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-sv252"]
Sep 30 10:06:06 crc kubenswrapper[4970]: W0930 10:06:06.535276 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05036766_e1fb_4fee_b4bc_5ff318fa9793.slice/crio-b38cadba857c81582d87c3e5836992fde7143a95a2a21f19ad87814bceec64d5 WatchSource:0}: Error finding container b38cadba857c81582d87c3e5836992fde7143a95a2a21f19ad87814bceec64d5: Status 404 returned error can't find the container with id b38cadba857c81582d87c3e5836992fde7143a95a2a21f19ad87814bceec64d5
Sep 30 10:06:06 crc kubenswrapper[4970]: I0930 10:06:06.659593 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Sep 30 10:06:06 crc kubenswrapper[4970]: W0930 10:06:06.669347 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dc8250d_7e71_408b_8aa3_947ebe6ef0a1.slice/crio-c7a60a0d7e111e01b949c3a7d0dd75050275ae60faab0205f046c9517c3968ea WatchSource:0}: Error finding container c7a60a0d7e111e01b949c3a7d0dd75050275ae60faab0205f046c9517c3968ea: Status 404 returned error can't find the container with id c7a60a0d7e111e01b949c3a7d0dd75050275ae60faab0205f046c9517c3968ea
Sep 30 10:06:07 crc kubenswrapper[4970]: I0930 10:06:07.486270 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb942b20-48f2-45c2-917f-d20a77fd2ada","Type":"ContainerStarted","Data":"720aa6e7631f5dccbf367dee0c014a88e6aeb315d2828c6f16827b2c26d278e8"}
Sep 30 10:06:07 crc kubenswrapper[4970]: I0930 10:06:07.486335 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb942b20-48f2-45c2-917f-d20a77fd2ada","Type":"ContainerStarted","Data":"79b4aac52892d5bab75e3c10b191238a6692f41e1c7e6fd8a65eec12b09e55b5"}
Sep 30 10:06:07 crc kubenswrapper[4970]: I0930 10:06:07.490329 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1","Type":"ContainerStarted","Data":"c7a60a0d7e111e01b949c3a7d0dd75050275ae60faab0205f046c9517c3968ea"}
Sep 30 10:06:07 crc kubenswrapper[4970]: I0930 10:06:07.492088 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sv252" event={"ID":"05036766-e1fb-4fee-b4bc-5ff318fa9793","Type":"ContainerStarted","Data":"709a65b17e3ac82455041dd6bf3eb091fd4efe7d568274319b67ad3be658cd43"}
Sep 30 10:06:07 crc kubenswrapper[4970]: I0930 10:06:07.492111 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sv252" event={"ID":"05036766-e1fb-4fee-b4bc-5ff318fa9793","Type":"ContainerStarted","Data":"b38cadba857c81582d87c3e5836992fde7143a95a2a21f19ad87814bceec64d5"}
Sep 30 10:06:07 crc kubenswrapper[4970]: I0930 10:06:07.529313 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.529286983 podStartE2EDuration="2.529286983s" podCreationTimestamp="2025-09-30 10:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:06:07.51566711 +0000 UTC m=+1180.587518044" watchObservedRunningTime="2025-09-30 10:06:07.529286983 +0000 UTC m=+1180.601137917"
Sep 30 10:06:07 crc kubenswrapper[4970]: I0930 10:06:07.690275 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d" path="/var/lib/kubelet/pods/0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d/volumes"
Sep 30 10:06:07 crc kubenswrapper[4970]: I0930 10:06:07.709814 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-sv252" podStartSLOduration=2.709792812 podStartE2EDuration="2.709792812s" podCreationTimestamp="2025-09-30 10:06:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:06:07.548344856 +0000 UTC m=+1180.620195790" watchObservedRunningTime="2025-09-30 10:06:07.709792812 +0000 UTC m=+1180.781643746"
Sep 30 10:06:08 crc kubenswrapper[4970]: I0930 10:06:08.509672 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1","Type":"ContainerStarted","Data":"cb4ea4a5941fc560c43ea88d44a3d2f0fec91b6f40af0532277a30c463505755"}
Sep 30 10:06:08 crc kubenswrapper[4970]: I0930 10:06:08.853240 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j"
Sep 30 10:06:08 crc kubenswrapper[4970]: I0930 10:06:08.933971 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-b2trj"]
Sep 30 10:06:08 crc kubenswrapper[4970]: I0930 10:06:08.934227 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865f5d856f-b2trj" podUID="6fcf5f84-7342-4cf4-864c-041b56ab8dbd" containerName="dnsmasq-dns" containerID="cri-o://e4ec6327b128effe3978c3cdd6a3300e2de8fb083ec9b519e7a4902398876f4c" gracePeriod=10
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.520432 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1","Type":"ContainerStarted","Data":"12eedb31f92e98b1bcbf3b790b7b6a63d9ce4f67c67e78eb4c2f5b7d565367e1"}
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.520795 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1","Type":"ContainerStarted","Data":"3a1bf02736ce131b40772af14d33695ec2f673faae770c890e18dd9a9685ab62"}
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.524332 4970 generic.go:334] "Generic (PLEG): container finished" podID="6fcf5f84-7342-4cf4-864c-041b56ab8dbd" containerID="e4ec6327b128effe3978c3cdd6a3300e2de8fb083ec9b519e7a4902398876f4c" exitCode=0
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.524361 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-b2trj" event={"ID":"6fcf5f84-7342-4cf4-864c-041b56ab8dbd","Type":"ContainerDied","Data":"e4ec6327b128effe3978c3cdd6a3300e2de8fb083ec9b519e7a4902398876f4c"}
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.524376 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-b2trj" event={"ID":"6fcf5f84-7342-4cf4-864c-041b56ab8dbd","Type":"ContainerDied","Data":"c8d97988e9e025cd29f6e64c708a0a6239953d78c18b402741982210b6e05428"}
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.524386 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8d97988e9e025cd29f6e64c708a0a6239953d78c18b402741982210b6e05428"
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.533344 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-b2trj"
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.703903 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-config\") pod \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") "
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.704323 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-dns-svc\") pod \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") "
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.704495 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-dns-swift-storage-0\") pod \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") "
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.704591 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zchk9\" (UniqueName: \"kubernetes.io/projected/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-kube-api-access-zchk9\") pod \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") "
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.704737 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-ovsdbserver-nb\") pod \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") "
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.704881 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-ovsdbserver-sb\") pod \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\" (UID: \"6fcf5f84-7342-4cf4-864c-041b56ab8dbd\") "
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.711186 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-kube-api-access-zchk9" (OuterVolumeSpecName: "kube-api-access-zchk9") pod "6fcf5f84-7342-4cf4-864c-041b56ab8dbd" (UID: "6fcf5f84-7342-4cf4-864c-041b56ab8dbd"). InnerVolumeSpecName "kube-api-access-zchk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.758737 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-config" (OuterVolumeSpecName: "config") pod "6fcf5f84-7342-4cf4-864c-041b56ab8dbd" (UID: "6fcf5f84-7342-4cf4-864c-041b56ab8dbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.764672 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6fcf5f84-7342-4cf4-864c-041b56ab8dbd" (UID: "6fcf5f84-7342-4cf4-864c-041b56ab8dbd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.766790 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6fcf5f84-7342-4cf4-864c-041b56ab8dbd" (UID: "6fcf5f84-7342-4cf4-864c-041b56ab8dbd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.770808 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6fcf5f84-7342-4cf4-864c-041b56ab8dbd" (UID: "6fcf5f84-7342-4cf4-864c-041b56ab8dbd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.772249 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6fcf5f84-7342-4cf4-864c-041b56ab8dbd" (UID: "6fcf5f84-7342-4cf4-864c-041b56ab8dbd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.807794 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.808166 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-config\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.808186 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-dns-svc\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.808204 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.808223 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zchk9\" (UniqueName: \"kubernetes.io/projected/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-kube-api-access-zchk9\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:09 crc kubenswrapper[4970]: I0930 10:06:09.808241 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6fcf5f84-7342-4cf4-864c-041b56ab8dbd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:10 crc kubenswrapper[4970]: I0930 10:06:10.532825 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-b2trj"
Sep 30 10:06:10 crc kubenswrapper[4970]: I0930 10:06:10.563424 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-b2trj"]
Sep 30 10:06:10 crc kubenswrapper[4970]: I0930 10:06:10.575138 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-b2trj"]
Sep 30 10:06:11 crc kubenswrapper[4970]: I0930 10:06:11.542651 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3dc8250d-7e71-408b-8aa3-947ebe6ef0a1","Type":"ContainerStarted","Data":"f63a3eef2d991d3ae3632ad5b418b4b71ecd9f25cb11795a18f40ecd0ff20c9a"}
Sep 30 10:06:11 crc kubenswrapper[4970]: I0930 10:06:11.543220 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Sep 30 10:06:11 crc kubenswrapper[4970]: I0930 10:06:11.544287 4970 generic.go:334] "Generic (PLEG): container finished" podID="05036766-e1fb-4fee-b4bc-5ff318fa9793" containerID="709a65b17e3ac82455041dd6bf3eb091fd4efe7d568274319b67ad3be658cd43" exitCode=0
Sep 30 10:06:11 crc kubenswrapper[4970]: I0930 10:06:11.544318 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sv252" event={"ID":"05036766-e1fb-4fee-b4bc-5ff318fa9793","Type":"ContainerDied","Data":"709a65b17e3ac82455041dd6bf3eb091fd4efe7d568274319b67ad3be658cd43"}
Sep 30 10:06:11 crc kubenswrapper[4970]: I0930 10:06:11.570297 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.682617998 podStartE2EDuration="6.570275566s" podCreationTimestamp="2025-09-30 10:06:05 +0000 UTC" firstStartedPulling="2025-09-30 10:06:06.674711954 +0000 UTC m=+1179.746562888" lastFinishedPulling="2025-09-30 10:06:10.562369522 +0000 UTC m=+1183.634220456" observedRunningTime="2025-09-30 10:06:11.569827734 +0000 UTC m=+1184.641678728" watchObservedRunningTime="2025-09-30 10:06:11.570275566 +0000 UTC m=+1184.642126510"
Sep 30 10:06:11 crc kubenswrapper[4970]: I0930 10:06:11.678526 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fcf5f84-7342-4cf4-864c-041b56ab8dbd" path="/var/lib/kubelet/pods/6fcf5f84-7342-4cf4-864c-041b56ab8dbd/volumes"
Sep 30 10:06:12 crc kubenswrapper[4970]: I0930 10:06:12.848333 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sv252"
Sep 30 10:06:12 crc kubenswrapper[4970]: I0930 10:06:12.968647 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-combined-ca-bundle\") pod \"05036766-e1fb-4fee-b4bc-5ff318fa9793\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") "
Sep 30 10:06:12 crc kubenswrapper[4970]: I0930 10:06:12.968794 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-scripts\") pod \"05036766-e1fb-4fee-b4bc-5ff318fa9793\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") "
Sep 30 10:06:12 crc kubenswrapper[4970]: I0930 10:06:12.968882 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-config-data\") pod \"05036766-e1fb-4fee-b4bc-5ff318fa9793\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") "
Sep 30 10:06:12 crc kubenswrapper[4970]: I0930 10:06:12.968925 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4jtm\" (UniqueName: \"kubernetes.io/projected/05036766-e1fb-4fee-b4bc-5ff318fa9793-kube-api-access-s4jtm\") pod \"05036766-e1fb-4fee-b4bc-5ff318fa9793\" (UID: \"05036766-e1fb-4fee-b4bc-5ff318fa9793\") "
Sep 30 10:06:12 crc kubenswrapper[4970]: I0930 10:06:12.976010 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05036766-e1fb-4fee-b4bc-5ff318fa9793-kube-api-access-s4jtm" (OuterVolumeSpecName: "kube-api-access-s4jtm") pod "05036766-e1fb-4fee-b4bc-5ff318fa9793" (UID: "05036766-e1fb-4fee-b4bc-5ff318fa9793"). InnerVolumeSpecName "kube-api-access-s4jtm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:06:12 crc kubenswrapper[4970]: I0930 10:06:12.977444 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-scripts" (OuterVolumeSpecName: "scripts") pod "05036766-e1fb-4fee-b4bc-5ff318fa9793" (UID: "05036766-e1fb-4fee-b4bc-5ff318fa9793"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:06:12 crc kubenswrapper[4970]: I0930 10:06:12.995728 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-config-data" (OuterVolumeSpecName: "config-data") pod "05036766-e1fb-4fee-b4bc-5ff318fa9793" (UID: "05036766-e1fb-4fee-b4bc-5ff318fa9793"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.007467 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "05036766-e1fb-4fee-b4bc-5ff318fa9793" (UID: "05036766-e1fb-4fee-b4bc-5ff318fa9793"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.071499 4970 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-scripts\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.071523 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.071553 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4jtm\" (UniqueName: \"kubernetes.io/projected/05036766-e1fb-4fee-b4bc-5ff318fa9793-kube-api-access-s4jtm\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.071564 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05036766-e1fb-4fee-b4bc-5ff318fa9793-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.569926 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-sv252" event={"ID":"05036766-e1fb-4fee-b4bc-5ff318fa9793","Type":"ContainerDied","Data":"b38cadba857c81582d87c3e5836992fde7143a95a2a21f19ad87814bceec64d5"}
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.570639 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b38cadba857c81582d87c3e5836992fde7143a95a2a21f19ad87814bceec64d5"
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.570116 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-sv252"
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.805031 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.805396 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb942b20-48f2-45c2-917f-d20a77fd2ada" containerName="nova-api-log" containerID="cri-o://79b4aac52892d5bab75e3c10b191238a6692f41e1c7e6fd8a65eec12b09e55b5" gracePeriod=30
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.805610 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb942b20-48f2-45c2-917f-d20a77fd2ada" containerName="nova-api-api" containerID="cri-o://720aa6e7631f5dccbf367dee0c014a88e6aeb315d2828c6f16827b2c26d278e8" gracePeriod=30
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.830653 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.830966 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="da69c255-8862-483f-befc-490c59e22b8d" containerName="nova-scheduler-scheduler" containerID="cri-o://c0abc0449e9b19d1d393344e411e0d079dcd58bfb849bc523aac496d6f2e885d" gracePeriod=30
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.853220 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.853505 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerName="nova-metadata-log" containerID="cri-o://f865d55dc10dfaff22d7485e134dd7508a1068ca089bef51ec8a538b2f16c6ca" gracePeriod=30
Sep 30 10:06:13 crc kubenswrapper[4970]: I0930 10:06:13.853617 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerName="nova-metadata-metadata" containerID="cri-o://38977e0598005485269cbaa549e133c231dcf4c3595ca46cd8fdca7199062970" gracePeriod=30
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.400326 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-865f5d856f-b2trj" podUID="6fcf5f84-7342-4cf4-864c-041b56ab8dbd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.194:5353: i/o timeout"
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.580817 4970 generic.go:334] "Generic (PLEG): container finished" podID="eb942b20-48f2-45c2-917f-d20a77fd2ada" containerID="720aa6e7631f5dccbf367dee0c014a88e6aeb315d2828c6f16827b2c26d278e8" exitCode=0
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.581211 4970 generic.go:334] "Generic (PLEG): container finished" podID="eb942b20-48f2-45c2-917f-d20a77fd2ada" containerID="79b4aac52892d5bab75e3c10b191238a6692f41e1c7e6fd8a65eec12b09e55b5" exitCode=143
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.580896 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb942b20-48f2-45c2-917f-d20a77fd2ada","Type":"ContainerDied","Data":"720aa6e7631f5dccbf367dee0c014a88e6aeb315d2828c6f16827b2c26d278e8"}
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.581289 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb942b20-48f2-45c2-917f-d20a77fd2ada","Type":"ContainerDied","Data":"79b4aac52892d5bab75e3c10b191238a6692f41e1c7e6fd8a65eec12b09e55b5"}
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.581305 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb942b20-48f2-45c2-917f-d20a77fd2ada","Type":"ContainerDied","Data":"5ffdbd929dbec9440867254b4f235c955b0dfd539f3b4feebcc46734e911728f"}
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.581379 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ffdbd929dbec9440867254b4f235c955b0dfd539f3b4feebcc46734e911728f"
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.583218 4970 generic.go:334] "Generic (PLEG): container finished" podID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerID="f865d55dc10dfaff22d7485e134dd7508a1068ca089bef51ec8a538b2f16c6ca" exitCode=143
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.583244 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b41f183c-fb01-431c-aa98-75bb7b7bb669","Type":"ContainerDied","Data":"f865d55dc10dfaff22d7485e134dd7508a1068ca089bef51ec8a538b2f16c6ca"}
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.589672 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.711115 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94sw9\" (UniqueName: \"kubernetes.io/projected/eb942b20-48f2-45c2-917f-d20a77fd2ada-kube-api-access-94sw9\") pod \"eb942b20-48f2-45c2-917f-d20a77fd2ada\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") "
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.711192 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-config-data\") pod \"eb942b20-48f2-45c2-917f-d20a77fd2ada\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") "
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.711243 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-combined-ca-bundle\") pod \"eb942b20-48f2-45c2-917f-d20a77fd2ada\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") "
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.711292 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-public-tls-certs\") pod \"eb942b20-48f2-45c2-917f-d20a77fd2ada\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") "
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.711378 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb942b20-48f2-45c2-917f-d20a77fd2ada-logs\") pod \"eb942b20-48f2-45c2-917f-d20a77fd2ada\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") "
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.711395 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-internal-tls-certs\") pod \"eb942b20-48f2-45c2-917f-d20a77fd2ada\" (UID: \"eb942b20-48f2-45c2-917f-d20a77fd2ada\") "
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.711639 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb942b20-48f2-45c2-917f-d20a77fd2ada-logs" (OuterVolumeSpecName: "logs") pod "eb942b20-48f2-45c2-917f-d20a77fd2ada" (UID: "eb942b20-48f2-45c2-917f-d20a77fd2ada"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.712313 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb942b20-48f2-45c2-917f-d20a77fd2ada-logs\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.716391 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb942b20-48f2-45c2-917f-d20a77fd2ada-kube-api-access-94sw9" (OuterVolumeSpecName: "kube-api-access-94sw9") pod "eb942b20-48f2-45c2-917f-d20a77fd2ada" (UID: "eb942b20-48f2-45c2-917f-d20a77fd2ada"). InnerVolumeSpecName "kube-api-access-94sw9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.739054 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb942b20-48f2-45c2-917f-d20a77fd2ada" (UID: "eb942b20-48f2-45c2-917f-d20a77fd2ada"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.750612 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-config-data" (OuterVolumeSpecName: "config-data") pod "eb942b20-48f2-45c2-917f-d20a77fd2ada" (UID: "eb942b20-48f2-45c2-917f-d20a77fd2ada"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.764027 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "eb942b20-48f2-45c2-917f-d20a77fd2ada" (UID: "eb942b20-48f2-45c2-917f-d20a77fd2ada"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.785400 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eb942b20-48f2-45c2-917f-d20a77fd2ada" (UID: "eb942b20-48f2-45c2-917f-d20a77fd2ada"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.815109 4970 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.815247 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94sw9\" (UniqueName: \"kubernetes.io/projected/eb942b20-48f2-45c2-917f-d20a77fd2ada-kube-api-access-94sw9\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.815314 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.815327 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:14 crc kubenswrapper[4970]: I0930 10:06:14.815339 4970 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb942b20-48f2-45c2-917f-d20a77fd2ada-public-tls-certs\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.595003 4970 generic.go:334] "Generic (PLEG): container finished" podID="da69c255-8862-483f-befc-490c59e22b8d" containerID="c0abc0449e9b19d1d393344e411e0d079dcd58bfb849bc523aac496d6f2e885d" exitCode=0
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.595136 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da69c255-8862-483f-befc-490c59e22b8d","Type":"ContainerDied","Data":"c0abc0449e9b19d1d393344e411e0d079dcd58bfb849bc523aac496d6f2e885d"}
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.595367 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"da69c255-8862-483f-befc-490c59e22b8d","Type":"ContainerDied","Data":"d2722dec05ed33a95349240131e30ed7c52aed1792cdd2658eb380e935b68a4e"}
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.595389 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2722dec05ed33a95349240131e30ed7c52aed1792cdd2658eb380e935b68a4e"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.595376 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.625520 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.643366 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.652157 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.694722 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb942b20-48f2-45c2-917f-d20a77fd2ada" path="/var/lib/kubelet/pods/eb942b20-48f2-45c2-917f-d20a77fd2ada/volumes"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.709585 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Sep 30 10:06:15 crc kubenswrapper[4970]: E0930 10:06:15.710028 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05036766-e1fb-4fee-b4bc-5ff318fa9793" containerName="nova-manage"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.710045 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="05036766-e1fb-4fee-b4bc-5ff318fa9793" containerName="nova-manage"
Sep 30 10:06:15 crc kubenswrapper[4970]: E0930 10:06:15.710060 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcf5f84-7342-4cf4-864c-041b56ab8dbd" containerName="init"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.710067 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcf5f84-7342-4cf4-864c-041b56ab8dbd" containerName="init"
Sep 30 10:06:15 crc kubenswrapper[4970]: E0930 10:06:15.710083 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcf5f84-7342-4cf4-864c-041b56ab8dbd" containerName="dnsmasq-dns"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.710091 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcf5f84-7342-4cf4-864c-041b56ab8dbd" containerName="dnsmasq-dns"
Sep 30 10:06:15 crc kubenswrapper[4970]: E0930 10:06:15.710106 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb942b20-48f2-45c2-917f-d20a77fd2ada" containerName="nova-api-api"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.710111 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb942b20-48f2-45c2-917f-d20a77fd2ada" containerName="nova-api-api"
Sep 30 10:06:15 crc kubenswrapper[4970]: E0930 10:06:15.710128 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da69c255-8862-483f-befc-490c59e22b8d" containerName="nova-scheduler-scheduler"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.710134 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="da69c255-8862-483f-befc-490c59e22b8d" containerName="nova-scheduler-scheduler"
Sep 30 10:06:15 crc kubenswrapper[4970]: E0930 10:06:15.710144 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb942b20-48f2-45c2-917f-d20a77fd2ada" containerName="nova-api-log"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.710149 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb942b20-48f2-45c2-917f-d20a77fd2ada" containerName="nova-api-log"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.710315 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb942b20-48f2-45c2-917f-d20a77fd2ada" containerName="nova-api-api"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.710331 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="05036766-e1fb-4fee-b4bc-5ff318fa9793" containerName="nova-manage"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.710342 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb942b20-48f2-45c2-917f-d20a77fd2ada" containerName="nova-api-log"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.710356 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcf5f84-7342-4cf4-864c-041b56ab8dbd" containerName="dnsmasq-dns"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.710370 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="da69c255-8862-483f-befc-490c59e22b8d" containerName="nova-scheduler-scheduler"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.711312 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.713879 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.714265 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.714484 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.718292 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.740364 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da69c255-8862-483f-befc-490c59e22b8d-combined-ca-bundle\") pod \"da69c255-8862-483f-befc-490c59e22b8d\" (UID: \"da69c255-8862-483f-befc-490c59e22b8d\") "
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.740610 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da69c255-8862-483f-befc-490c59e22b8d-config-data\") pod \"da69c255-8862-483f-befc-490c59e22b8d\" (UID: \"da69c255-8862-483f-befc-490c59e22b8d\") "
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.740674 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbnnp\" (UniqueName: \"kubernetes.io/projected/da69c255-8862-483f-befc-490c59e22b8d-kube-api-access-qbnnp\") pod \"da69c255-8862-483f-befc-490c59e22b8d\" (UID: \"da69c255-8862-483f-befc-490c59e22b8d\") "
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.755229 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da69c255-8862-483f-befc-490c59e22b8d-kube-api-access-qbnnp" (OuterVolumeSpecName: "kube-api-access-qbnnp") pod "da69c255-8862-483f-befc-490c59e22b8d" (UID: "da69c255-8862-483f-befc-490c59e22b8d"). InnerVolumeSpecName "kube-api-access-qbnnp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.767843 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da69c255-8862-483f-befc-490c59e22b8d-config-data" (OuterVolumeSpecName: "config-data") pod "da69c255-8862-483f-befc-490c59e22b8d" (UID: "da69c255-8862-483f-befc-490c59e22b8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.775142 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da69c255-8862-483f-befc-490c59e22b8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da69c255-8862-483f-befc-490c59e22b8d" (UID: "da69c255-8862-483f-befc-490c59e22b8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.843226 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-config-data\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.843391 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-logs\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.843473 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.843546 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brscv\" (UniqueName: \"kubernetes.io/projected/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-kube-api-access-brscv\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.843701 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-public-tls-certs\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.843784 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.843962 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbnnp\" (UniqueName: \"kubernetes.io/projected/da69c255-8862-483f-befc-490c59e22b8d-kube-api-access-qbnnp\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.844007 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da69c255-8862-483f-befc-490c59e22b8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.844024 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da69c255-8862-483f-befc-490c59e22b8d-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.945949 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-logs\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.946023 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.946067 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brscv\" (UniqueName: \"kubernetes.io/projected/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-kube-api-access-brscv\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.946134 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-public-tls-certs\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.946573 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-logs\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.946875 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.946961 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-config-data\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.950299 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.950322 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-config-data\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.950906 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-public-tls-certs\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.951171 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:15 crc kubenswrapper[4970]: I0930 10:06:15.967391 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brscv\" (UniqueName: \"kubernetes.io/projected/b1eaeb05-1ae9-4640-bc87-da6567c4f1a1-kube-api-access-brscv\") pod \"nova-api-0\" (UID: \"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1\") " pod="openstack/nova-api-0"
Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.035931 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.485826 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.605765 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1","Type":"ContainerStarted","Data":"999c0fdf36cb3c40eb09e0f4d9c759e358cf92087c351b9a5f259719347961e3"}
Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.605800 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.648795 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.655402 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.707200 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.709119 4970 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.719067 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.719489 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.867805 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c599b731-6bc5-4882-9f48-0abfa125f843-config-data\") pod \"nova-scheduler-0\" (UID: \"c599b731-6bc5-4882-9f48-0abfa125f843\") " pod="openstack/nova-scheduler-0" Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.867866 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8f8g\" (UniqueName: \"kubernetes.io/projected/c599b731-6bc5-4882-9f48-0abfa125f843-kube-api-access-s8f8g\") pod \"nova-scheduler-0\" (UID: \"c599b731-6bc5-4882-9f48-0abfa125f843\") " pod="openstack/nova-scheduler-0" Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.867916 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c599b731-6bc5-4882-9f48-0abfa125f843-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c599b731-6bc5-4882-9f48-0abfa125f843\") " pod="openstack/nova-scheduler-0" Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.969272 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c599b731-6bc5-4882-9f48-0abfa125f843-config-data\") pod \"nova-scheduler-0\" (UID: \"c599b731-6bc5-4882-9f48-0abfa125f843\") " pod="openstack/nova-scheduler-0" Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.969322 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8f8g\" (UniqueName: \"kubernetes.io/projected/c599b731-6bc5-4882-9f48-0abfa125f843-kube-api-access-s8f8g\") pod \"nova-scheduler-0\" (UID: \"c599b731-6bc5-4882-9f48-0abfa125f843\") " pod="openstack/nova-scheduler-0" Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.969374 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c599b731-6bc5-4882-9f48-0abfa125f843-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c599b731-6bc5-4882-9f48-0abfa125f843\") " pod="openstack/nova-scheduler-0" Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.978130 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c599b731-6bc5-4882-9f48-0abfa125f843-config-data\") pod \"nova-scheduler-0\" (UID: \"c599b731-6bc5-4882-9f48-0abfa125f843\") " pod="openstack/nova-scheduler-0" Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.986807 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c599b731-6bc5-4882-9f48-0abfa125f843-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c599b731-6bc5-4882-9f48-0abfa125f843\") " pod="openstack/nova-scheduler-0" Sep 30 10:06:16 crc kubenswrapper[4970]: I0930 10:06:16.991917 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8f8g\" (UniqueName: 
\"kubernetes.io/projected/c599b731-6bc5-4882-9f48-0abfa125f843-kube-api-access-s8f8g\") pod \"nova-scheduler-0\" (UID: \"c599b731-6bc5-4882-9f48-0abfa125f843\") " pod="openstack/nova-scheduler-0" Sep 30 10:06:17 crc kubenswrapper[4970]: I0930 10:06:17.009688 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:47496->10.217.0.197:8775: read: connection reset by peer" Sep 30 10:06:17 crc kubenswrapper[4970]: I0930 10:06:17.009690 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.197:8775/\": read tcp 10.217.0.2:47498->10.217.0.197:8775: read: connection reset by peer" Sep 30 10:06:17 crc kubenswrapper[4970]: I0930 10:06:17.031682 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Sep 30 10:06:17 crc kubenswrapper[4970]: I0930 10:06:17.620082 4970 generic.go:334] "Generic (PLEG): container finished" podID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerID="38977e0598005485269cbaa549e133c231dcf4c3595ca46cd8fdca7199062970" exitCode=0 Sep 30 10:06:17 crc kubenswrapper[4970]: I0930 10:06:17.620451 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b41f183c-fb01-431c-aa98-75bb7b7bb669","Type":"ContainerDied","Data":"38977e0598005485269cbaa549e133c231dcf4c3595ca46cd8fdca7199062970"} Sep 30 10:06:17 crc kubenswrapper[4970]: I0930 10:06:17.622716 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1","Type":"ContainerStarted","Data":"2604be9c2234886f7f70f8957b6c857193ae58a0ceef9c0621f8a0fa71e89fd6"} Sep 30 10:06:17 crc kubenswrapper[4970]: I0930 10:06:17.622740 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b1eaeb05-1ae9-4640-bc87-da6567c4f1a1","Type":"ContainerStarted","Data":"1196e5b6753eac59914dda560dc5dd6cdc01a5f52ba464716fc65278e44ccb2a"} Sep 30 10:06:17 crc kubenswrapper[4970]: I0930 10:06:17.664968 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.663743251 podStartE2EDuration="2.663743251s" podCreationTimestamp="2025-09-30 10:06:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:06:17.651185837 +0000 UTC m=+1190.723036771" watchObservedRunningTime="2025-09-30 10:06:17.663743251 +0000 UTC m=+1190.735594235" Sep 30 10:06:17 crc kubenswrapper[4970]: I0930 10:06:17.682553 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da69c255-8862-483f-befc-490c59e22b8d" path="/var/lib/kubelet/pods/da69c255-8862-483f-befc-490c59e22b8d/volumes" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.171778 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.270370 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Sep 30 10:06:18 crc kubenswrapper[4970]: W0930 10:06:18.274208 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc599b731_6bc5_4882_9f48_0abfa125f843.slice/crio-5697166cbde655c54ba5fe8fa40c77dbf14ffa62dcf6062d598757565740b309 WatchSource:0}: Error finding container 5697166cbde655c54ba5fe8fa40c77dbf14ffa62dcf6062d598757565740b309: Status 404 returned error can't find the container with id 5697166cbde655c54ba5fe8fa40c77dbf14ffa62dcf6062d598757565740b309 Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.297817 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b41f183c-fb01-431c-aa98-75bb7b7bb669-logs\") pod \"b41f183c-fb01-431c-aa98-75bb7b7bb669\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.298061 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-combined-ca-bundle\") pod \"b41f183c-fb01-431c-aa98-75bb7b7bb669\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.298130 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7p4h\" (UniqueName: \"kubernetes.io/projected/b41f183c-fb01-431c-aa98-75bb7b7bb669-kube-api-access-f7p4h\") pod \"b41f183c-fb01-431c-aa98-75bb7b7bb669\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.298267 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-nova-metadata-tls-certs\") pod \"b41f183c-fb01-431c-aa98-75bb7b7bb669\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.298364 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-config-data\") pod \"b41f183c-fb01-431c-aa98-75bb7b7bb669\" (UID: \"b41f183c-fb01-431c-aa98-75bb7b7bb669\") " Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.298963 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b41f183c-fb01-431c-aa98-75bb7b7bb669-logs" (OuterVolumeSpecName: "logs") pod "b41f183c-fb01-431c-aa98-75bb7b7bb669" (UID: "b41f183c-fb01-431c-aa98-75bb7b7bb669"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.299373 4970 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b41f183c-fb01-431c-aa98-75bb7b7bb669-logs\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.303135 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41f183c-fb01-431c-aa98-75bb7b7bb669-kube-api-access-f7p4h" (OuterVolumeSpecName: "kube-api-access-f7p4h") pod "b41f183c-fb01-431c-aa98-75bb7b7bb669" (UID: "b41f183c-fb01-431c-aa98-75bb7b7bb669"). 
InnerVolumeSpecName "kube-api-access-f7p4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.325576 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-config-data" (OuterVolumeSpecName: "config-data") pod "b41f183c-fb01-431c-aa98-75bb7b7bb669" (UID: "b41f183c-fb01-431c-aa98-75bb7b7bb669"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.352737 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b41f183c-fb01-431c-aa98-75bb7b7bb669" (UID: "b41f183c-fb01-431c-aa98-75bb7b7bb669"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.381508 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b41f183c-fb01-431c-aa98-75bb7b7bb669" (UID: "b41f183c-fb01-431c-aa98-75bb7b7bb669"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.400550 4970 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.400580 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7p4h\" (UniqueName: \"kubernetes.io/projected/b41f183c-fb01-431c-aa98-75bb7b7bb669-kube-api-access-f7p4h\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.400590 4970 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.400599 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b41f183c-fb01-431c-aa98-75bb7b7bb669-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.640152 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c599b731-6bc5-4882-9f48-0abfa125f843","Type":"ContainerStarted","Data":"415fe1843d87e2955b85a7268ee357a4f3cd583bc528688c3f0f973153041e9c"} Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.640232 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c599b731-6bc5-4882-9f48-0abfa125f843","Type":"ContainerStarted","Data":"5697166cbde655c54ba5fe8fa40c77dbf14ffa62dcf6062d598757565740b309"} Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.644424 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b41f183c-fb01-431c-aa98-75bb7b7bb669","Type":"ContainerDied","Data":"5a27be673c3d5a8f7401501259fc7fbd993fc0fbe5bce9cf1dc1d66a57dda732"} Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.644560 4970 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.644575 4970 scope.go:117] "RemoveContainer" containerID="38977e0598005485269cbaa549e133c231dcf4c3595ca46cd8fdca7199062970" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.664021 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.663979615 podStartE2EDuration="2.663979615s" podCreationTimestamp="2025-09-30 10:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:06:18.659349578 +0000 UTC m=+1191.731200512" watchObservedRunningTime="2025-09-30 10:06:18.663979615 +0000 UTC m=+1191.735830559" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.697019 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.701776 4970 scope.go:117] "RemoveContainer" containerID="f865d55dc10dfaff22d7485e134dd7508a1068ca089bef51ec8a538b2f16c6ca" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.719410 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.727786 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:06:18 crc kubenswrapper[4970]: E0930 10:06:18.728262 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerName="nova-metadata-metadata" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.728299 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerName="nova-metadata-metadata" Sep 30 10:06:18 crc kubenswrapper[4970]: E0930 10:06:18.728355 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerName="nova-metadata-log" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.728364 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerName="nova-metadata-log" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.728579 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerName="nova-metadata-metadata" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.728609 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" containerName="nova-metadata-log" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.729764 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.733043 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.733218 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.740132 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.814414 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l9cr\" (UniqueName: \"kubernetes.io/projected/fec6d022-057f-4f80-9da1-25c1f4e1544e-kube-api-access-6l9cr\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.814548 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec6d022-057f-4f80-9da1-25c1f4e1544e-config-data\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.814567 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fec6d022-057f-4f80-9da1-25c1f4e1544e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.814611 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fec6d022-057f-4f80-9da1-25c1f4e1544e-logs\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.814625 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec6d022-057f-4f80-9da1-25c1f4e1544e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.916508 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec6d022-057f-4f80-9da1-25c1f4e1544e-config-data\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.918008 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fec6d022-057f-4f80-9da1-25c1f4e1544e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.918147 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fec6d022-057f-4f80-9da1-25c1f4e1544e-logs\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 
10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.918242 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec6d022-057f-4f80-9da1-25c1f4e1544e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.918523 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fec6d022-057f-4f80-9da1-25c1f4e1544e-logs\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.918528 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l9cr\" (UniqueName: \"kubernetes.io/projected/fec6d022-057f-4f80-9da1-25c1f4e1544e-kube-api-access-6l9cr\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.920705 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fec6d022-057f-4f80-9da1-25c1f4e1544e-config-data\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.921527 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fec6d022-057f-4f80-9da1-25c1f4e1544e-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.922162 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fec6d022-057f-4f80-9da1-25c1f4e1544e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:18 crc kubenswrapper[4970]: I0930 10:06:18.935972 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6l9cr\" (UniqueName: \"kubernetes.io/projected/fec6d022-057f-4f80-9da1-25c1f4e1544e-kube-api-access-6l9cr\") pod \"nova-metadata-0\" (UID: \"fec6d022-057f-4f80-9da1-25c1f4e1544e\") " pod="openstack/nova-metadata-0" Sep 30 10:06:19 crc kubenswrapper[4970]: I0930 10:06:19.059162 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Sep 30 10:06:19 crc kubenswrapper[4970]: I0930 10:06:19.527415 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Sep 30 10:06:19 crc kubenswrapper[4970]: I0930 10:06:19.663620 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fec6d022-057f-4f80-9da1-25c1f4e1544e","Type":"ContainerStarted","Data":"f228619b394cd552df0664d6b9ee58b95f541773fc897e2f70cfb48889ecc333"} Sep 30 10:06:19 crc kubenswrapper[4970]: I0930 10:06:19.686103 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41f183c-fb01-431c-aa98-75bb7b7bb669" path="/var/lib/kubelet/pods/b41f183c-fb01-431c-aa98-75bb7b7bb669/volumes" Sep 30 10:06:20 crc kubenswrapper[4970]: I0930 10:06:20.678727 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fec6d022-057f-4f80-9da1-25c1f4e1544e","Type":"ContainerStarted","Data":"16249d0bc6069095a1378d89219994056b6977addbeee0d75276b98dc36302fa"} Sep 30 10:06:20 crc kubenswrapper[4970]: I0930 10:06:20.679226 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fec6d022-057f-4f80-9da1-25c1f4e1544e","Type":"ContainerStarted","Data":"6677d16ee50e4829fdedf498040a82d48bf8c943e930d6109d8976688487cb37"} Sep 30 10:06:20 crc kubenswrapper[4970]: I0930 10:06:20.713438 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.713406565 podStartE2EDuration="2.713406565s" podCreationTimestamp="2025-09-30 10:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:06:20.70884735 +0000 UTC m=+1193.780698304" watchObservedRunningTime="2025-09-30 10:06:20.713406565 +0000 UTC m=+1193.785257509" Sep 30 10:06:22 crc kubenswrapper[4970]: I0930 10:06:22.032367 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Sep 30 10:06:24 crc kubenswrapper[4970]: I0930 10:06:24.059894 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 10:06:24 crc kubenswrapper[4970]: I0930 10:06:24.060393 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Sep 30 10:06:26 crc kubenswrapper[4970]: I0930 10:06:26.037166 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 10:06:26 crc kubenswrapper[4970]: I0930 10:06:26.037227 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Sep 30 10:06:27 crc kubenswrapper[4970]: I0930 10:06:27.032337 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Sep 30 10:06:27 crc kubenswrapper[4970]: I0930 10:06:27.052345 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b1eaeb05-1ae9-4640-bc87-da6567c4f1a1" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 10:06:27 crc kubenswrapper[4970]: I0930 10:06:27.052407 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b1eaeb05-1ae9-4640-bc87-da6567c4f1a1" containerName="nova-api-api" probeResult="failure" output="Get 
\"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 10:06:27 crc kubenswrapper[4970]: I0930 10:06:27.071962 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Sep 30 10:06:27 crc kubenswrapper[4970]: I0930 10:06:27.799378 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Sep 30 10:06:29 crc kubenswrapper[4970]: I0930 10:06:29.060395 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 30 10:06:29 crc kubenswrapper[4970]: I0930 10:06:29.060870 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Sep 30 10:06:30 crc kubenswrapper[4970]: I0930 10:06:30.082288 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fec6d022-057f-4f80-9da1-25c1f4e1544e" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 10:06:30 crc kubenswrapper[4970]: I0930 10:06:30.082413 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fec6d022-057f-4f80-9da1-25c1f4e1544e" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 30 10:06:34 crc kubenswrapper[4970]: I0930 10:06:34.820878 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:06:34 crc kubenswrapper[4970]: I0930 10:06:34.821438 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:06:35 crc kubenswrapper[4970]: I0930 10:06:35.763266 4970 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod0b2d84ee-9f7a-45f7-8ddd-a50d05a0507d] : Timed out while waiting for systemd to remove kubepods-besteffort-pod0b2d84ee_9f7a_45f7_8ddd_a50d05a0507d.slice" Sep 30 10:06:36 crc kubenswrapper[4970]: I0930 10:06:36.045084 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 30 10:06:36 crc kubenswrapper[4970]: I0930 10:06:36.046479 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 30 10:06:36 crc kubenswrapper[4970]: I0930 10:06:36.048297 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Sep 30 10:06:36 crc kubenswrapper[4970]: I0930 10:06:36.053718 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 30 10:06:36 crc kubenswrapper[4970]: I0930 10:06:36.169255 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Sep 30 
10:06:36 crc kubenswrapper[4970]: I0930 10:06:36.845023 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Sep 30 10:06:36 crc kubenswrapper[4970]: I0930 10:06:36.850866 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Sep 30 10:06:39 crc kubenswrapper[4970]: I0930 10:06:39.078948 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Sep 30 10:06:39 crc kubenswrapper[4970]: I0930 10:06:39.079546 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Sep 30 10:06:39 crc kubenswrapper[4970]: I0930 10:06:39.086783 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Sep 30 10:06:39 crc kubenswrapper[4970]: I0930 10:06:39.087084 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Sep 30 10:06:47 crc kubenswrapper[4970]: I0930 10:06:47.152900 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Sep 30 10:06:48 crc kubenswrapper[4970]: I0930 10:06:48.179667 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Sep 30 10:06:51 crc kubenswrapper[4970]: I0930 10:06:51.111867 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="7bc5f72b-8b51-4a55-971a-83135118e627" containerName="rabbitmq" containerID="cri-o://212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95" gracePeriod=604797 Sep 30 10:06:52 crc kubenswrapper[4970]: I0930 10:06:52.856605 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e0b2bd53-a874-4d92-b137-24f772f5d61c" containerName="rabbitmq" containerID="cri-o://e607da78e75bc80d704d1f6cde7909bdbc75e630bb1b648b81a03c52611b95a5" gracePeriod=604796 Sep 30 10:06:56 crc kubenswrapper[4970]: I0930 10:06:56.492818 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e0b2bd53-a874-4d92-b137-24f772f5d61c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Sep 30 10:06:56 crc kubenswrapper[4970]: I0930 10:06:56.807960 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="7bc5f72b-8b51-4a55-971a-83135118e627" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.103:5671: connect: connection refused" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.717368 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.765866 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-server-conf\") pod \"7bc5f72b-8b51-4a55-971a-83135118e627\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.766030 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kpp5\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-kube-api-access-4kpp5\") pod \"7bc5f72b-8b51-4a55-971a-83135118e627\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.766122 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-config-data\") pod \"7bc5f72b-8b51-4a55-971a-83135118e627\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.766251 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-confd\") pod \"7bc5f72b-8b51-4a55-971a-83135118e627\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.766338 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-plugins-conf\") pod \"7bc5f72b-8b51-4a55-971a-83135118e627\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.766381 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7bc5f72b-8b51-4a55-971a-83135118e627-pod-info\") pod \"7bc5f72b-8b51-4a55-971a-83135118e627\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.766466 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-plugins\") pod \"7bc5f72b-8b51-4a55-971a-83135118e627\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.766500 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-tls\") pod \"7bc5f72b-8b51-4a55-971a-83135118e627\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.766603 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"7bc5f72b-8b51-4a55-971a-83135118e627\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.766640 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7bc5f72b-8b51-4a55-971a-83135118e627-erlang-cookie-secret\") pod \"7bc5f72b-8b51-4a55-971a-83135118e627\" (UID: 
\"7bc5f72b-8b51-4a55-971a-83135118e627\") " Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.766665 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-erlang-cookie\") pod \"7bc5f72b-8b51-4a55-971a-83135118e627\" (UID: \"7bc5f72b-8b51-4a55-971a-83135118e627\") " Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.768067 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "7bc5f72b-8b51-4a55-971a-83135118e627" (UID: "7bc5f72b-8b51-4a55-971a-83135118e627"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.768589 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "7bc5f72b-8b51-4a55-971a-83135118e627" (UID: "7bc5f72b-8b51-4a55-971a-83135118e627"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.769199 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "7bc5f72b-8b51-4a55-971a-83135118e627" (UID: "7bc5f72b-8b51-4a55-971a-83135118e627"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.776042 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/7bc5f72b-8b51-4a55-971a-83135118e627-pod-info" (OuterVolumeSpecName: "pod-info") pod "7bc5f72b-8b51-4a55-971a-83135118e627" (UID: "7bc5f72b-8b51-4a55-971a-83135118e627"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.777219 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "7bc5f72b-8b51-4a55-971a-83135118e627" (UID: "7bc5f72b-8b51-4a55-971a-83135118e627"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.777236 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bc5f72b-8b51-4a55-971a-83135118e627-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "7bc5f72b-8b51-4a55-971a-83135118e627" (UID: "7bc5f72b-8b51-4a55-971a-83135118e627"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.777509 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-kube-api-access-4kpp5" (OuterVolumeSpecName: "kube-api-access-4kpp5") pod "7bc5f72b-8b51-4a55-971a-83135118e627" (UID: "7bc5f72b-8b51-4a55-971a-83135118e627"). InnerVolumeSpecName "kube-api-access-4kpp5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.779617 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "7bc5f72b-8b51-4a55-971a-83135118e627" (UID: "7bc5f72b-8b51-4a55-971a-83135118e627"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.854447 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-config-data" (OuterVolumeSpecName: "config-data") pod "7bc5f72b-8b51-4a55-971a-83135118e627" (UID: "7bc5f72b-8b51-4a55-971a-83135118e627"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.868706 4970 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-plugins-conf\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.868734 4970 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7bc5f72b-8b51-4a55-971a-83135118e627-pod-info\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.868742 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.868750 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.868772 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.868780 4970 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7bc5f72b-8b51-4a55-971a-83135118e627-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.868791 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.868801 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kpp5\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-kube-api-access-4kpp5\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.868809 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.891757 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-server-conf" (OuterVolumeSpecName: "server-conf") pod "7bc5f72b-8b51-4a55-971a-83135118e627" (UID: "7bc5f72b-8b51-4a55-971a-83135118e627"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.896128 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.921909 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "7bc5f72b-8b51-4a55-971a-83135118e627" (UID: "7bc5f72b-8b51-4a55-971a-83135118e627"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.970594 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.970628 4970 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7bc5f72b-8b51-4a55-971a-83135118e627-server-conf\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:57 crc kubenswrapper[4970]: I0930 10:06:57.970655 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7bc5f72b-8b51-4a55-971a-83135118e627-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.087109 4970 generic.go:334] "Generic (PLEG): container finished" podID="7bc5f72b-8b51-4a55-971a-83135118e627" containerID="212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95" exitCode=0 Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.087164 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7bc5f72b-8b51-4a55-971a-83135118e627","Type":"ContainerDied","Data":"212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95"} Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.087190 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7bc5f72b-8b51-4a55-971a-83135118e627","Type":"ContainerDied","Data":"7bf9ea78e61f7734f2406e7fcc3f0a90258cc8afb7d343365cca20bf7e4b5fa8"} Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.087206 4970 scope.go:117] "RemoveContainer" containerID="212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95" Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.087317 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.154082 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.159788 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.169098 4970 scope.go:117] "RemoveContainer" containerID="bd3bcbead8ffdfabbe6793eb7505c5bbcae238ae1df09b02911c00024d615194"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.177524 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Sep 30 10:06:58 crc kubenswrapper[4970]: E0930 10:06:58.177949 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bc5f72b-8b51-4a55-971a-83135118e627" containerName="rabbitmq"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.177969 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bc5f72b-8b51-4a55-971a-83135118e627" containerName="rabbitmq"
Sep 30 10:06:58 crc kubenswrapper[4970]: E0930 10:06:58.182043 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bc5f72b-8b51-4a55-971a-83135118e627" containerName="setup-container"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.182063 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bc5f72b-8b51-4a55-971a-83135118e627" containerName="setup-container"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.182260 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bc5f72b-8b51-4a55-971a-83135118e627" containerName="rabbitmq"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.183201 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.187297 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.187468 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.188073 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-rgpx8"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.188366 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.191153 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.195192 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.195359 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.206283 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.228655 4970 scope.go:117] "RemoveContainer" containerID="212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95"
Sep 30 10:06:58 crc kubenswrapper[4970]: E0930 10:06:58.230554 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95\": container with ID starting with 212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95 not found: ID does not exist" containerID="212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.230592 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95"} err="failed to get container status \"212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95\": rpc error: code = NotFound desc = could not find container \"212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95\": container with ID starting with 212d18ce42d0d5523ae0594d460b7e3923e201c99f0d87fcaf1daca650a88d95 not found: ID does not exist"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.230616 4970 scope.go:117] "RemoveContainer" containerID="bd3bcbead8ffdfabbe6793eb7505c5bbcae238ae1df09b02911c00024d615194"
Sep 30 10:06:58 crc kubenswrapper[4970]: E0930 10:06:58.231083 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3bcbead8ffdfabbe6793eb7505c5bbcae238ae1df09b02911c00024d615194\": container with ID starting with bd3bcbead8ffdfabbe6793eb7505c5bbcae238ae1df09b02911c00024d615194 not found: ID does not exist" containerID="bd3bcbead8ffdfabbe6793eb7505c5bbcae238ae1df09b02911c00024d615194"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.231110 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd3bcbead8ffdfabbe6793eb7505c5bbcae238ae1df09b02911c00024d615194"} err="failed to get container status \"bd3bcbead8ffdfabbe6793eb7505c5bbcae238ae1df09b02911c00024d615194\": rpc error: code = NotFound desc = could not find container \"bd3bcbead8ffdfabbe6793eb7505c5bbcae238ae1df09b02911c00024d615194\": container with ID starting with bd3bcbead8ffdfabbe6793eb7505c5bbcae238ae1df09b02911c00024d615194 not found: ID does not exist"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.274617 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.274676 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4582e248-17bf-40bb-9072-f64d72d1fc82-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.274694 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4582e248-17bf-40bb-9072-f64d72d1fc82-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.274721 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4582e248-17bf-40bb-9072-f64d72d1fc82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.274737 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4582e248-17bf-40bb-9072-f64d72d1fc82-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.274777 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwgk6\" (UniqueName: \"kubernetes.io/projected/4582e248-17bf-40bb-9072-f64d72d1fc82-kube-api-access-fwgk6\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.274812 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4582e248-17bf-40bb-9072-f64d72d1fc82-config-data\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.274840 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4582e248-17bf-40bb-9072-f64d72d1fc82-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.274860 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4582e248-17bf-40bb-9072-f64d72d1fc82-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.274875 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4582e248-17bf-40bb-9072-f64d72d1fc82-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.274889 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4582e248-17bf-40bb-9072-f64d72d1fc82-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.376234 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.376306 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4582e248-17bf-40bb-9072-f64d72d1fc82-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.376329 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4582e248-17bf-40bb-9072-f64d72d1fc82-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.376365 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4582e248-17bf-40bb-9072-f64d72d1fc82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.376391 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4582e248-17bf-40bb-9072-f64d72d1fc82-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.376436 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwgk6\" (UniqueName: \"kubernetes.io/projected/4582e248-17bf-40bb-9072-f64d72d1fc82-kube-api-access-fwgk6\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.376476 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4582e248-17bf-40bb-9072-f64d72d1fc82-config-data\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.376505 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4582e248-17bf-40bb-9072-f64d72d1fc82-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.376529 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4582e248-17bf-40bb-9072-f64d72d1fc82-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.376547 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4582e248-17bf-40bb-9072-f64d72d1fc82-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.376563 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4582e248-17bf-40bb-9072-f64d72d1fc82-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.376905 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4582e248-17bf-40bb-9072-f64d72d1fc82-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.377089 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4582e248-17bf-40bb-9072-f64d72d1fc82-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.377093 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.377676 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4582e248-17bf-40bb-9072-f64d72d1fc82-config-data\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.378045 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4582e248-17bf-40bb-9072-f64d72d1fc82-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.378090 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4582e248-17bf-40bb-9072-f64d72d1fc82-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.382782 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4582e248-17bf-40bb-9072-f64d72d1fc82-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.382782 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4582e248-17bf-40bb-9072-f64d72d1fc82-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.383973 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4582e248-17bf-40bb-9072-f64d72d1fc82-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.386621 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4582e248-17bf-40bb-9072-f64d72d1fc82-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.396427 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwgk6\" (UniqueName: \"kubernetes.io/projected/4582e248-17bf-40bb-9072-f64d72d1fc82-kube-api-access-fwgk6\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.438226 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"4582e248-17bf-40bb-9072-f64d72d1fc82\") " pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.511317 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Sep 30 10:06:58 crc kubenswrapper[4970]: I0930 10:06:58.991716 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Sep 30 10:06:59 crc kubenswrapper[4970]: W0930 10:06:59.002413 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4582e248_17bf_40bb_9072_f64d72d1fc82.slice/crio-d5e0505300a87da4caec066cfffe0a4f307c6560605dd46ccec31bb39d8874ca WatchSource:0}: Error finding container d5e0505300a87da4caec066cfffe0a4f307c6560605dd46ccec31bb39d8874ca: Status 404 returned error can't find the container with id d5e0505300a87da4caec066cfffe0a4f307c6560605dd46ccec31bb39d8874ca
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.101904 4970 generic.go:334] "Generic (PLEG): container finished" podID="e0b2bd53-a874-4d92-b137-24f772f5d61c" containerID="e607da78e75bc80d704d1f6cde7909bdbc75e630bb1b648b81a03c52611b95a5" exitCode=0
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.101971 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0b2bd53-a874-4d92-b137-24f772f5d61c","Type":"ContainerDied","Data":"e607da78e75bc80d704d1f6cde7909bdbc75e630bb1b648b81a03c52611b95a5"}
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.104675 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4582e248-17bf-40bb-9072-f64d72d1fc82","Type":"ContainerStarted","Data":"d5e0505300a87da4caec066cfffe0a4f307c6560605dd46ccec31bb39d8874ca"}
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.479080 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.605361 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-config-data\") pod \"e0b2bd53-a874-4d92-b137-24f772f5d61c\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") "
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.605471 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-confd\") pod \"e0b2bd53-a874-4d92-b137-24f772f5d61c\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") "
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.605515 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-plugins-conf\") pod \"e0b2bd53-a874-4d92-b137-24f772f5d61c\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") "
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.605597 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-226zq\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-kube-api-access-226zq\") pod \"e0b2bd53-a874-4d92-b137-24f772f5d61c\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") "
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.605638 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0b2bd53-a874-4d92-b137-24f772f5d61c-pod-info\") pod \"e0b2bd53-a874-4d92-b137-24f772f5d61c\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") "
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.605693 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-tls\") pod \"e0b2bd53-a874-4d92-b137-24f772f5d61c\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") "
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.606071 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"e0b2bd53-a874-4d92-b137-24f772f5d61c\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") "
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.606115 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-plugins\") pod \"e0b2bd53-a874-4d92-b137-24f772f5d61c\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") "
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.606191 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0b2bd53-a874-4d92-b137-24f772f5d61c-erlang-cookie-secret\") pod \"e0b2bd53-a874-4d92-b137-24f772f5d61c\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") "
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.606289 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-server-conf\") pod \"e0b2bd53-a874-4d92-b137-24f772f5d61c\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") "
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.606342 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-erlang-cookie\") pod \"e0b2bd53-a874-4d92-b137-24f772f5d61c\" (UID: \"e0b2bd53-a874-4d92-b137-24f772f5d61c\") "
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.608398 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e0b2bd53-a874-4d92-b137-24f772f5d61c" (UID: "e0b2bd53-a874-4d92-b137-24f772f5d61c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.617645 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e0b2bd53-a874-4d92-b137-24f772f5d61c" (UID: "e0b2bd53-a874-4d92-b137-24f772f5d61c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.630593 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e0b2bd53-a874-4d92-b137-24f772f5d61c-pod-info" (OuterVolumeSpecName: "pod-info") pod "e0b2bd53-a874-4d92-b137-24f772f5d61c" (UID: "e0b2bd53-a874-4d92-b137-24f772f5d61c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.631247 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "e0b2bd53-a874-4d92-b137-24f772f5d61c" (UID: "e0b2bd53-a874-4d92-b137-24f772f5d61c"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.631480 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e0b2bd53-a874-4d92-b137-24f772f5d61c" (UID: "e0b2bd53-a874-4d92-b137-24f772f5d61c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.648696 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b2bd53-a874-4d92-b137-24f772f5d61c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e0b2bd53-a874-4d92-b137-24f772f5d61c" (UID: "e0b2bd53-a874-4d92-b137-24f772f5d61c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.648830 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-kube-api-access-226zq" (OuterVolumeSpecName: "kube-api-access-226zq") pod "e0b2bd53-a874-4d92-b137-24f772f5d61c" (UID: "e0b2bd53-a874-4d92-b137-24f772f5d61c"). InnerVolumeSpecName "kube-api-access-226zq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.656341 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-config-data" (OuterVolumeSpecName: "config-data") pod "e0b2bd53-a874-4d92-b137-24f772f5d61c" (UID: "e0b2bd53-a874-4d92-b137-24f772f5d61c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.647499 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e0b2bd53-a874-4d92-b137-24f772f5d61c" (UID: "e0b2bd53-a874-4d92-b137-24f772f5d61c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.698894 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bc5f72b-8b51-4a55-971a-83135118e627" path="/var/lib/kubelet/pods/7bc5f72b-8b51-4a55-971a-83135118e627/volumes"
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.704607 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-server-conf" (OuterVolumeSpecName: "server-conf") pod "e0b2bd53-a874-4d92-b137-24f772f5d61c" (UID: "e0b2bd53-a874-4d92-b137-24f772f5d61c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.709753 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-config-data\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.709789 4970 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-plugins-conf\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.709806 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-226zq\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-kube-api-access-226zq\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.712628 4970 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e0b2bd53-a874-4d92-b137-24f772f5d61c-pod-info\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.712658 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.712698 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" "
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.712713 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.712727 4970 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e0b2bd53-a874-4d92-b137-24f772f5d61c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.712739 4970 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e0b2bd53-a874-4d92-b137-24f772f5d61c-server-conf\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.712752 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.750536 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.761518 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e0b2bd53-a874-4d92-b137-24f772f5d61c" (UID: "e0b2bd53-a874-4d92-b137-24f772f5d61c"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.818047 4970 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e0b2bd53-a874-4d92-b137-24f772f5d61c-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Sep 30 10:06:59 crc kubenswrapper[4970]: I0930 10:06:59.818118 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.118600 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e0b2bd53-a874-4d92-b137-24f772f5d61c","Type":"ContainerDied","Data":"c37922e62f634d228d88b700a70cc73dc906bb8c45949613e14146318a56387c"}
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.119809 4970 scope.go:117] "RemoveContainer" containerID="e607da78e75bc80d704d1f6cde7909bdbc75e630bb1b648b81a03c52611b95a5"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.118755 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.120652 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4582e248-17bf-40bb-9072-f64d72d1fc82","Type":"ContainerStarted","Data":"b62feecd1b67740be8444e99ac86bcee822905917b5ab4daaeeafdc3d4239d79"}
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.176780 4970 scope.go:117] "RemoveContainer" containerID="0fa72e8e7de3abae746e76bd2c494053bb66770c7d210235abd107e7a49c3403"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.187055 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.202220 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.222737 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Sep 30 10:07:00 crc kubenswrapper[4970]: E0930 10:07:00.223416 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b2bd53-a874-4d92-b137-24f772f5d61c" containerName="rabbitmq"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.223441 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b2bd53-a874-4d92-b137-24f772f5d61c" containerName="rabbitmq"
Sep 30 10:07:00 crc kubenswrapper[4970]: E0930 10:07:00.223460 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0b2bd53-a874-4d92-b137-24f772f5d61c" containerName="setup-container"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.223468 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0b2bd53-a874-4d92-b137-24f772f5d61c" containerName="setup-container"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.223704 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0b2bd53-a874-4d92-b137-24f772f5d61c" containerName="rabbitmq"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.227763 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.230577 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.230756 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.230848 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.231086 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.231284 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.231547 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.231840 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-8mc6k"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.237600 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.327411 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7690d385-7ca5-472c-8ee7-5c3aa4030951-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.327471 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7690d385-7ca5-472c-8ee7-5c3aa4030951-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.327499 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncgnd\" (UniqueName: \"kubernetes.io/projected/7690d385-7ca5-472c-8ee7-5c3aa4030951-kube-api-access-ncgnd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.327669 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7690d385-7ca5-472c-8ee7-5c3aa4030951-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.327735 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7690d385-7ca5-472c-8ee7-5c3aa4030951-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.327793 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7690d385-7ca5-472c-8ee7-5c3aa4030951-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.327868 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7690d385-7ca5-472c-8ee7-5c3aa4030951-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.328067 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7690d385-7ca5-472c-8ee7-5c3aa4030951-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.328113 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7690d385-7ca5-472c-8ee7-5c3aa4030951-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.328278 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.328306 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7690d385-7ca5-472c-8ee7-5c3aa4030951-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.430306 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7690d385-7ca5-472c-8ee7-5c3aa4030951-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.430369 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7690d385-7ca5-472c-8ee7-5c3aa4030951-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.430445 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7690d385-7ca5-472c-8ee7-5c3aa4030951-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.430475 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7690d385-7ca5-472c-8ee7-5c3aa4030951-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.430542 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.430565 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7690d385-7ca5-472c-8ee7-5c3aa4030951-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.430608 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7690d385-7ca5-472c-8ee7-5c3aa4030951-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.430643 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7690d385-7ca5-472c-8ee7-5c3aa4030951-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.430676 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncgnd\" (UniqueName: \"kubernetes.io/projected/7690d385-7ca5-472c-8ee7-5c3aa4030951-kube-api-access-ncgnd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.430720 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7690d385-7ca5-472c-8ee7-5c3aa4030951-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.430750 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7690d385-7ca5-472c-8ee7-5c3aa4030951-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.430965 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7690d385-7ca5-472c-8ee7-5c3aa4030951-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.431233 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7690d385-7ca5-472c-8ee7-5c3aa4030951-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.431441 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.432038 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7690d385-7ca5-472c-8ee7-5c3aa4030951-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.432483 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7690d385-7ca5-472c-8ee7-5c3aa4030951-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.432596 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7690d385-7ca5-472c-8ee7-5c3aa4030951-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.436542 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7690d385-7ca5-472c-8ee7-5c3aa4030951-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.436568 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7690d385-7ca5-472c-8ee7-5c3aa4030951-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.439568 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7690d385-7ca5-472c-8ee7-5c3aa4030951-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.448648 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7690d385-7ca5-472c-8ee7-5c3aa4030951-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.451649 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncgnd\" (UniqueName: \"kubernetes.io/projected/7690d385-7ca5-472c-8ee7-5c3aa4030951-kube-api-access-ncgnd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.464752 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"7690d385-7ca5-472c-8ee7-5c3aa4030951\") " pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.555110 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.669859 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-vw9dr"]
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.677449 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.681498 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.691038 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-vw9dr"]
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.737149 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.737251 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-dns-svc\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.737365 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.737463 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szwcs\" (UniqueName: \"kubernetes.io/projected/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-kube-api-access-szwcs\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.737489 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.737508 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-config\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.737543 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.839364 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szwcs\" (UniqueName: \"kubernetes.io/projected/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-kube-api-access-szwcs\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.839442 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.839462 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-config\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.839515 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.839546 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.839609 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-dns-svc\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.839757 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.841458 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-openstack-edpm-ipam\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.842005 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-ovsdbserver-nb\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.842191 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-config\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.842215 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-dns-svc\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.842279 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-dns-swift-storage-0\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.849327 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-ovsdbserver-sb\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.858565 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szwcs\" (UniqueName: \"kubernetes.io/projected/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-kube-api-access-szwcs\") pod \"dnsmasq-dns-5576978c7c-vw9dr\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:00 crc kubenswrapper[4970]: I0930 10:07:00.998667 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:01 crc kubenswrapper[4970]: I0930 10:07:01.149213 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Sep 30 10:07:01 crc kubenswrapper[4970]: I0930 10:07:01.471872 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-vw9dr"]
Sep 30 10:07:01 crc kubenswrapper[4970]: W0930 10:07:01.474288 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4cf9b39_abf8_4c7c_a396_af60670fd4ed.slice/crio-45a84bc14481de70fe18b669c8f6ae905198dd7eeb558d6f2a799a381b6ad7e7 WatchSource:0}: Error finding container 45a84bc14481de70fe18b669c8f6ae905198dd7eeb558d6f2a799a381b6ad7e7: Status 404 returned error can't find the container with id 45a84bc14481de70fe18b669c8f6ae905198dd7eeb558d6f2a799a381b6ad7e7
Sep 30 10:07:01 crc kubenswrapper[4970]: I0930 10:07:01.680114 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0b2bd53-a874-4d92-b137-24f772f5d61c" path="/var/lib/kubelet/pods/e0b2bd53-a874-4d92-b137-24f772f5d61c/volumes"
Sep 30 10:07:02 crc kubenswrapper[4970]: I0930 10:07:02.143640 4970 generic.go:334] "Generic (PLEG): container finished" podID="c4cf9b39-abf8-4c7c-a396-af60670fd4ed" containerID="533c4b99e9c98cc60f106b653383184142733ad2a48f9615669be790b7e40394" exitCode=0
Sep 30 10:07:02 crc kubenswrapper[4970]: I0930 10:07:02.144085 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr" event={"ID":"c4cf9b39-abf8-4c7c-a396-af60670fd4ed","Type":"ContainerDied","Data":"533c4b99e9c98cc60f106b653383184142733ad2a48f9615669be790b7e40394"}
Sep 30 10:07:02 crc kubenswrapper[4970]: I0930 10:07:02.144223 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr" event={"ID":"c4cf9b39-abf8-4c7c-a396-af60670fd4ed","Type":"ContainerStarted","Data":"45a84bc14481de70fe18b669c8f6ae905198dd7eeb558d6f2a799a381b6ad7e7"}
Sep 30 10:07:02 crc kubenswrapper[4970]: I0930 10:07:02.146385 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7690d385-7ca5-472c-8ee7-5c3aa4030951","Type":"ContainerStarted","Data":"84b47958e7e48331a6e623326893b95d34ce713193d8fe42df39d08da4151c22"}
Sep 30 10:07:02 crc kubenswrapper[4970]: I0930 10:07:02.146413 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7690d385-7ca5-472c-8ee7-5c3aa4030951","Type":"ContainerStarted","Data":"493ff832f7fbbc4784030b1c5911811b91923b1ab166e5098951f10c69c0b312"}
Sep 30 10:07:03 crc kubenswrapper[4970]: I0930 10:07:03.161023 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr" event={"ID":"c4cf9b39-abf8-4c7c-a396-af60670fd4ed","Type":"ContainerStarted","Data":"de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f"}
Sep 30 10:07:03 crc kubenswrapper[4970]: I0930 10:07:03.194166 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr" podStartSLOduration=3.194143703 podStartE2EDuration="3.194143703s" podCreationTimestamp="2025-09-30 10:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:07:03.184422637 +0000 UTC m=+1236.256273591" watchObservedRunningTime="2025-09-30 10:07:03.194143703 +0000 UTC m=+1236.265994647"
Sep 30 10:07:04 crc kubenswrapper[4970]: I0930 10:07:04.167518 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:04 crc kubenswrapper[4970]: I0930 10:07:04.821694 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 10:07:04 crc kubenswrapper[4970]: I0930 10:07:04.822093 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.000145 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr"
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.063298 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-ksk7j"]
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.063649 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" podUID="729849c2-5efc-43c1-841e-a971a5739723" containerName="dnsmasq-dns" containerID="cri-o://c79b00fa9fa4db26f56d8b94fecccb75e09385d347669435f249c30097b62581" gracePeriod=10
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.207319 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-6bk6t"]
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.219869 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t"
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.222483 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-6bk6t"]
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.246882 4970 generic.go:334] "Generic (PLEG): container finished" podID="729849c2-5efc-43c1-841e-a971a5739723" containerID="c79b00fa9fa4db26f56d8b94fecccb75e09385d347669435f249c30097b62581" exitCode=0
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.246924 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" event={"ID":"729849c2-5efc-43c1-841e-a971a5739723","Type":"ContainerDied","Data":"c79b00fa9fa4db26f56d8b94fecccb75e09385d347669435f249c30097b62581"}
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.345135 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-config\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t"
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.345177 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xmd5\" (UniqueName: \"kubernetes.io/projected/90db2984-4178-487d-812f-a51f345ae911-kube-api-access-2xmd5\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t"
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.345236 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t"
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.345284 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t"
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.345338 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t"
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.345355 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t"
Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.345387 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\"
(UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.448501 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.448550 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.448584 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.448617 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-config\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.448654 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xmd5\" (UniqueName: \"kubernetes.io/projected/90db2984-4178-487d-812f-a51f345ae911-kube-api-access-2xmd5\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.448701 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.448750 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.450138 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-openstack-edpm-ipam\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.450199 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-dns-swift-storage-0\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " 
pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.450237 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-dns-svc\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.450277 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-ovsdbserver-nb\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.450584 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-ovsdbserver-sb\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.450705 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90db2984-4178-487d-812f-a51f345ae911-config\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.470314 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xmd5\" (UniqueName: \"kubernetes.io/projected/90db2984-4178-487d-812f-a51f345ae911-kube-api-access-2xmd5\") pod \"dnsmasq-dns-8c6f6df99-6bk6t\" (UID: \"90db2984-4178-487d-812f-a51f345ae911\") " pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.555371 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.583333 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.651498 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-dns-svc\") pod \"729849c2-5efc-43c1-841e-a971a5739723\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.651597 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-dns-swift-storage-0\") pod \"729849c2-5efc-43c1-841e-a971a5739723\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.651651 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw58j\" (UniqueName: \"kubernetes.io/projected/729849c2-5efc-43c1-841e-a971a5739723-kube-api-access-pw58j\") pod \"729849c2-5efc-43c1-841e-a971a5739723\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.651689 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-config\") pod \"729849c2-5efc-43c1-841e-a971a5739723\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.651747 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-ovsdbserver-sb\") pod \"729849c2-5efc-43c1-841e-a971a5739723\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.651861 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-ovsdbserver-nb\") pod \"729849c2-5efc-43c1-841e-a971a5739723\" (UID: \"729849c2-5efc-43c1-841e-a971a5739723\") " Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.656587 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/729849c2-5efc-43c1-841e-a971a5739723-kube-api-access-pw58j" (OuterVolumeSpecName: "kube-api-access-pw58j") pod "729849c2-5efc-43c1-841e-a971a5739723" (UID: "729849c2-5efc-43c1-841e-a971a5739723"). InnerVolumeSpecName "kube-api-access-pw58j". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.720152 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-config" (OuterVolumeSpecName: "config") pod "729849c2-5efc-43c1-841e-a971a5739723" (UID: "729849c2-5efc-43c1-841e-a971a5739723"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.730471 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "729849c2-5efc-43c1-841e-a971a5739723" (UID: "729849c2-5efc-43c1-841e-a971a5739723"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.736064 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "729849c2-5efc-43c1-841e-a971a5739723" (UID: "729849c2-5efc-43c1-841e-a971a5739723"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.738641 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "729849c2-5efc-43c1-841e-a971a5739723" (UID: "729849c2-5efc-43c1-841e-a971a5739723"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.751563 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "729849c2-5efc-43c1-841e-a971a5739723" (UID: "729849c2-5efc-43c1-841e-a971a5739723"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.755336 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.755478 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw58j\" (UniqueName: \"kubernetes.io/projected/729849c2-5efc-43c1-841e-a971a5739723-kube-api-access-pw58j\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.755536 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.755629 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.755686 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:11 crc kubenswrapper[4970]: I0930 10:07:11.755745 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/729849c2-5efc-43c1-841e-a971a5739723-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:12 crc kubenswrapper[4970]: I0930 10:07:12.014847 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8c6f6df99-6bk6t"] Sep 30 10:07:12 crc kubenswrapper[4970]: W0930 10:07:12.021463 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90db2984_4178_487d_812f_a51f345ae911.slice/crio-726266968e5256fcbc7e92d32e9b5cb3684e27d96a31930047c3898abf35ee48 WatchSource:0}: Error finding container 
726266968e5256fcbc7e92d32e9b5cb3684e27d96a31930047c3898abf35ee48: Status 404 returned error can't find the container with id 726266968e5256fcbc7e92d32e9b5cb3684e27d96a31930047c3898abf35ee48 Sep 30 10:07:12 crc kubenswrapper[4970]: I0930 10:07:12.260963 4970 generic.go:334] "Generic (PLEG): container finished" podID="90db2984-4178-487d-812f-a51f345ae911" containerID="57e8f7fdbce0f05e4fa45013062c5677a3d776e1d8af6ce28bc23d3338d799c7" exitCode=0 Sep 30 10:07:12 crc kubenswrapper[4970]: I0930 10:07:12.261016 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" event={"ID":"90db2984-4178-487d-812f-a51f345ae911","Type":"ContainerDied","Data":"57e8f7fdbce0f05e4fa45013062c5677a3d776e1d8af6ce28bc23d3338d799c7"} Sep 30 10:07:12 crc kubenswrapper[4970]: I0930 10:07:12.261060 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" event={"ID":"90db2984-4178-487d-812f-a51f345ae911","Type":"ContainerStarted","Data":"726266968e5256fcbc7e92d32e9b5cb3684e27d96a31930047c3898abf35ee48"} Sep 30 10:07:12 crc kubenswrapper[4970]: I0930 10:07:12.263562 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" event={"ID":"729849c2-5efc-43c1-841e-a971a5739723","Type":"ContainerDied","Data":"e4342cc9ddeda49df0bfb374630254b8416f90968624bae13343a1465375543e"} Sep 30 10:07:12 crc kubenswrapper[4970]: I0930 10:07:12.263595 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-ksk7j" Sep 30 10:07:12 crc kubenswrapper[4970]: I0930 10:07:12.263605 4970 scope.go:117] "RemoveContainer" containerID="c79b00fa9fa4db26f56d8b94fecccb75e09385d347669435f249c30097b62581" Sep 30 10:07:12 crc kubenswrapper[4970]: I0930 10:07:12.443823 4970 scope.go:117] "RemoveContainer" containerID="039b546644d8b91b940fc49386206766abfd36717f8c1a45e8173dd303793bb9" Sep 30 10:07:12 crc kubenswrapper[4970]: I0930 10:07:12.477673 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-ksk7j"] Sep 30 10:07:12 crc kubenswrapper[4970]: I0930 10:07:12.488055 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-ksk7j"] Sep 30 10:07:13 crc kubenswrapper[4970]: I0930 10:07:13.275012 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" event={"ID":"90db2984-4178-487d-812f-a51f345ae911","Type":"ContainerStarted","Data":"0b3e0fd3fd89e944e47c93db5eb28c609c2b71f406c41126fd14b6c53403201b"} Sep 30 10:07:13 crc kubenswrapper[4970]: I0930 10:07:13.275265 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:13 crc kubenswrapper[4970]: I0930 10:07:13.315030 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" podStartSLOduration=2.315009802 podStartE2EDuration="2.315009802s" podCreationTimestamp="2025-09-30 10:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:07:13.310706894 +0000 UTC m=+1246.382557828" watchObservedRunningTime="2025-09-30 10:07:13.315009802 +0000 UTC m=+1246.386860736" Sep 30 10:07:13 crc kubenswrapper[4970]: I0930 10:07:13.682730 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="729849c2-5efc-43c1-841e-a971a5739723" 
path="/var/lib/kubelet/pods/729849c2-5efc-43c1-841e-a971a5739723/volumes" Sep 30 10:07:21 crc kubenswrapper[4970]: I0930 10:07:21.585216 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8c6f6df99-6bk6t" Sep 30 10:07:21 crc kubenswrapper[4970]: I0930 10:07:21.694937 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-vw9dr"] Sep 30 10:07:21 crc kubenswrapper[4970]: I0930 10:07:21.695343 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr" podUID="c4cf9b39-abf8-4c7c-a396-af60670fd4ed" containerName="dnsmasq-dns" containerID="cri-o://de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f" gracePeriod=10 Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.199298 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.371821 4970 generic.go:334] "Generic (PLEG): container finished" podID="c4cf9b39-abf8-4c7c-a396-af60670fd4ed" containerID="de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f" exitCode=0 Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.371871 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr" event={"ID":"c4cf9b39-abf8-4c7c-a396-af60670fd4ed","Type":"ContainerDied","Data":"de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f"} Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.371888 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.371909 4970 scope.go:117] "RemoveContainer" containerID="de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.371897 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5576978c7c-vw9dr" event={"ID":"c4cf9b39-abf8-4c7c-a396-af60670fd4ed","Type":"ContainerDied","Data":"45a84bc14481de70fe18b669c8f6ae905198dd7eeb558d6f2a799a381b6ad7e7"} Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.383574 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-openstack-edpm-ipam\") pod \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.383684 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-ovsdbserver-nb\") pod \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.383744 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-dns-swift-storage-0\") pod \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.383883 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-dns-svc\") pod 
\"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.383957 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-config\") pod \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.384114 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szwcs\" (UniqueName: \"kubernetes.io/projected/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-kube-api-access-szwcs\") pod \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.384150 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-ovsdbserver-sb\") pod \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\" (UID: \"c4cf9b39-abf8-4c7c-a396-af60670fd4ed\") " Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.403265 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-kube-api-access-szwcs" (OuterVolumeSpecName: "kube-api-access-szwcs") pod "c4cf9b39-abf8-4c7c-a396-af60670fd4ed" (UID: "c4cf9b39-abf8-4c7c-a396-af60670fd4ed"). InnerVolumeSpecName "kube-api-access-szwcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.417318 4970 scope.go:117] "RemoveContainer" containerID="533c4b99e9c98cc60f106b653383184142733ad2a48f9615669be790b7e40394" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.448298 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c4cf9b39-abf8-4c7c-a396-af60670fd4ed" (UID: "c4cf9b39-abf8-4c7c-a396-af60670fd4ed"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.458786 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c4cf9b39-abf8-4c7c-a396-af60670fd4ed" (UID: "c4cf9b39-abf8-4c7c-a396-af60670fd4ed"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.464411 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-config" (OuterVolumeSpecName: "config") pod "c4cf9b39-abf8-4c7c-a396-af60670fd4ed" (UID: "c4cf9b39-abf8-4c7c-a396-af60670fd4ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.466875 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c4cf9b39-abf8-4c7c-a396-af60670fd4ed" (UID: "c4cf9b39-abf8-4c7c-a396-af60670fd4ed"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.468922 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c4cf9b39-abf8-4c7c-a396-af60670fd4ed" (UID: "c4cf9b39-abf8-4c7c-a396-af60670fd4ed"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.474095 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c4cf9b39-abf8-4c7c-a396-af60670fd4ed" (UID: "c4cf9b39-abf8-4c7c-a396-af60670fd4ed"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.486963 4970 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.487027 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.487037 4970 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.487048 4970 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-dns-svc\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.487062 4970 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.487076 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szwcs\" (UniqueName: \"kubernetes.io/projected/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-kube-api-access-szwcs\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.487088 4970 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c4cf9b39-abf8-4c7c-a396-af60670fd4ed-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.514313 4970 scope.go:117] "RemoveContainer" containerID="de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f" Sep 30 10:07:22 crc kubenswrapper[4970]: E0930 10:07:22.514985 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f\": container with ID starting with de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f not found: ID does not exist" containerID="de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.515061 4970 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f"} err="failed to get container status \"de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f\": rpc error: code = NotFound desc = could not find container \"de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f\": container with ID starting with de99bf9b6303645818ad88fe2592e7b139a43a69b44fdfd84531b4ef52b9829f not found: ID does not exist" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.515097 4970 scope.go:117] "RemoveContainer" containerID="533c4b99e9c98cc60f106b653383184142733ad2a48f9615669be790b7e40394" Sep 30 10:07:22 crc kubenswrapper[4970]: E0930 10:07:22.515505 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"533c4b99e9c98cc60f106b653383184142733ad2a48f9615669be790b7e40394\": container with ID starting with 533c4b99e9c98cc60f106b653383184142733ad2a48f9615669be790b7e40394 not found: ID does not exist" containerID="533c4b99e9c98cc60f106b653383184142733ad2a48f9615669be790b7e40394" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.515529 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"533c4b99e9c98cc60f106b653383184142733ad2a48f9615669be790b7e40394"} err="failed to get container status \"533c4b99e9c98cc60f106b653383184142733ad2a48f9615669be790b7e40394\": rpc error: code = NotFound desc = could not find container \"533c4b99e9c98cc60f106b653383184142733ad2a48f9615669be790b7e40394\": container with ID starting with 533c4b99e9c98cc60f106b653383184142733ad2a48f9615669be790b7e40394 not found: ID does not exist" Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.729339 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-vw9dr"] Sep 30 10:07:22 crc kubenswrapper[4970]: I0930 10:07:22.737943 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5576978c7c-vw9dr"] Sep 30 10:07:23 crc kubenswrapper[4970]: I0930 10:07:23.681482 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4cf9b39-abf8-4c7c-a396-af60670fd4ed" path="/var/lib/kubelet/pods/c4cf9b39-abf8-4c7c-a396-af60670fd4ed/volumes" Sep 30 10:07:30 crc kubenswrapper[4970]: I0930 10:07:30.489193 4970 generic.go:334] "Generic (PLEG): container finished" podID="4582e248-17bf-40bb-9072-f64d72d1fc82" containerID="b62feecd1b67740be8444e99ac86bcee822905917b5ab4daaeeafdc3d4239d79" exitCode=0 Sep 30 10:07:30 crc kubenswrapper[4970]: I0930 10:07:30.489366 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4582e248-17bf-40bb-9072-f64d72d1fc82","Type":"ContainerDied","Data":"b62feecd1b67740be8444e99ac86bcee822905917b5ab4daaeeafdc3d4239d79"} Sep 30 10:07:31 crc kubenswrapper[4970]: I0930 10:07:31.502851 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4582e248-17bf-40bb-9072-f64d72d1fc82","Type":"ContainerStarted","Data":"fa0d62f05bfa44af359cc1a7aeaf0b91dd8b696bbb162a1f28666c746e59956b"} Sep 30 10:07:31 crc kubenswrapper[4970]: I0930 10:07:31.503502 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Sep 30 10:07:31 crc kubenswrapper[4970]: I0930 10:07:31.542663 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=33.542643146 
podStartE2EDuration="33.542643146s" podCreationTimestamp="2025-09-30 10:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:07:31.537311109 +0000 UTC m=+1264.609162093" watchObservedRunningTime="2025-09-30 10:07:31.542643146 +0000 UTC m=+1264.614494080" Sep 30 10:07:32 crc kubenswrapper[4970]: I0930 10:07:32.513423 4970 generic.go:334] "Generic (PLEG): container finished" podID="7690d385-7ca5-472c-8ee7-5c3aa4030951" containerID="84b47958e7e48331a6e623326893b95d34ce713193d8fe42df39d08da4151c22" exitCode=0 Sep 30 10:07:32 crc kubenswrapper[4970]: I0930 10:07:32.514816 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7690d385-7ca5-472c-8ee7-5c3aa4030951","Type":"ContainerDied","Data":"84b47958e7e48331a6e623326893b95d34ce713193d8fe42df39d08da4151c22"} Sep 30 10:07:33 crc kubenswrapper[4970]: I0930 10:07:33.523940 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7690d385-7ca5-472c-8ee7-5c3aa4030951","Type":"ContainerStarted","Data":"8960fecaddce787d33c444fe95c4c6d5f55afcf8aca40e6fc6a2fb479d4da20a"} Sep 30 10:07:33 crc kubenswrapper[4970]: I0930 10:07:33.524672 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:07:33 crc kubenswrapper[4970]: I0930 10:07:33.547020 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=33.546981859 podStartE2EDuration="33.546981859s" podCreationTimestamp="2025-09-30 10:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:07:33.544613273 +0000 UTC m=+1266.616464247" watchObservedRunningTime="2025-09-30 10:07:33.546981859 +0000 UTC m=+1266.618832793" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.821914 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.822579 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.822649 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.823850 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"39953ec91959e4f49ceb3ed4f3dfdbe1d7a2e025175d7df609efe49614e86013"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.823925 4970 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://39953ec91959e4f49ceb3ed4f3dfdbe1d7a2e025175d7df609efe49614e86013" gracePeriod=600 Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.919772 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr"] Sep 30 10:07:34 crc kubenswrapper[4970]: E0930 10:07:34.920438 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729849c2-5efc-43c1-841e-a971a5739723" containerName="dnsmasq-dns" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.920466 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="729849c2-5efc-43c1-841e-a971a5739723" containerName="dnsmasq-dns" Sep 30 10:07:34 crc kubenswrapper[4970]: E0930 10:07:34.920489 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4cf9b39-abf8-4c7c-a396-af60670fd4ed" containerName="init" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.920497 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4cf9b39-abf8-4c7c-a396-af60670fd4ed" containerName="init" Sep 30 10:07:34 crc kubenswrapper[4970]: E0930 10:07:34.920520 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="729849c2-5efc-43c1-841e-a971a5739723" containerName="init" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.920528 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="729849c2-5efc-43c1-841e-a971a5739723" containerName="init" Sep 30 10:07:34 crc kubenswrapper[4970]: E0930 10:07:34.920554 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4cf9b39-abf8-4c7c-a396-af60670fd4ed" containerName="dnsmasq-dns" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.920560 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4cf9b39-abf8-4c7c-a396-af60670fd4ed" containerName="dnsmasq-dns" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.920762 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4cf9b39-abf8-4c7c-a396-af60670fd4ed" containerName="dnsmasq-dns" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.920785 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="729849c2-5efc-43c1-841e-a971a5739723" containerName="dnsmasq-dns" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.921732 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.926106 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.928450 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.929179 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.929447 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.958406 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.958479 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts5l5\" (UniqueName: \"kubernetes.io/projected/ebfb47a0-7f15-4839-b84d-7aef631222f8-kube-api-access-ts5l5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.958548 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.958619 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:34 crc kubenswrapper[4970]: I0930 10:07:34.998245 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr"] Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.061467 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.061561 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-inventory\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.061654 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.061685 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts5l5\" (UniqueName: \"kubernetes.io/projected/ebfb47a0-7f15-4839-b84d-7aef631222f8-kube-api-access-ts5l5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.078359 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.078651 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.078945 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.087862 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts5l5\" (UniqueName: \"kubernetes.io/projected/ebfb47a0-7f15-4839-b84d-7aef631222f8-kube-api-access-ts5l5\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.317704 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.547736 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="39953ec91959e4f49ceb3ed4f3dfdbe1d7a2e025175d7df609efe49614e86013" exitCode=0 Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.547816 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"39953ec91959e4f49ceb3ed4f3dfdbe1d7a2e025175d7df609efe49614e86013"} Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.548058 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"211775f0ae6ad2cf1245081565b8e3cfb1184d576eabfc89da9e31fdd3771913"} Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.548083 4970 scope.go:117] "RemoveContainer" containerID="7e145a3822b96bea172852c66d616934dcbd854c3400bad7729b559f1279373d" Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.930536 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr"] Sep 30 10:07:35 crc kubenswrapper[4970]: W0930 10:07:35.934630 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebfb47a0_7f15_4839_b84d_7aef631222f8.slice/crio-b0aaa2218dd0b023fb11ff70c2f647081ae41b82e098d2813fe1d8e676d5a298 WatchSource:0}: Error finding container b0aaa2218dd0b023fb11ff70c2f647081ae41b82e098d2813fe1d8e676d5a298: Status 404 returned error can't find the container with id b0aaa2218dd0b023fb11ff70c2f647081ae41b82e098d2813fe1d8e676d5a298 Sep 30 10:07:35 crc kubenswrapper[4970]: I0930 10:07:35.937492 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 10:07:36 crc kubenswrapper[4970]: I0930 10:07:36.569127 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" event={"ID":"ebfb47a0-7f15-4839-b84d-7aef631222f8","Type":"ContainerStarted","Data":"b0aaa2218dd0b023fb11ff70c2f647081ae41b82e098d2813fe1d8e676d5a298"} Sep 30 10:07:44 crc kubenswrapper[4970]: I0930 10:07:44.677810 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" event={"ID":"ebfb47a0-7f15-4839-b84d-7aef631222f8","Type":"ContainerStarted","Data":"5d05f6b604aca22d78373b4332c0e8b55ef3bb7e586206f0d320d36b4f13b175"} Sep 30 10:07:44 crc kubenswrapper[4970]: I0930 10:07:44.703200 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" podStartSLOduration=2.337227816 podStartE2EDuration="10.703177883s" podCreationTimestamp="2025-09-30 10:07:34 +0000 UTC" firstStartedPulling="2025-09-30 10:07:35.937267712 +0000 UTC m=+1269.009118646" lastFinishedPulling="2025-09-30 10:07:44.303217779 +0000 UTC m=+1277.375068713" observedRunningTime="2025-09-30 10:07:44.694723851 +0000 UTC m=+1277.766574795" watchObservedRunningTime="2025-09-30 10:07:44.703177883 +0000 UTC m=+1277.775028817" Sep 30 10:07:48 crc kubenswrapper[4970]: I0930 10:07:48.514209 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/rabbitmq-server-0" Sep 30 10:07:50 crc kubenswrapper[4970]: I0930 10:07:50.560259 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Sep 30 10:07:55 crc kubenswrapper[4970]: I0930 10:07:55.794959 4970 generic.go:334] "Generic (PLEG): container finished" podID="ebfb47a0-7f15-4839-b84d-7aef631222f8" containerID="5d05f6b604aca22d78373b4332c0e8b55ef3bb7e586206f0d320d36b4f13b175" exitCode=0 Sep 30 10:07:55 crc kubenswrapper[4970]: I0930 10:07:55.795024 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" event={"ID":"ebfb47a0-7f15-4839-b84d-7aef631222f8","Type":"ContainerDied","Data":"5d05f6b604aca22d78373b4332c0e8b55ef3bb7e586206f0d320d36b4f13b175"} Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.293106 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.427646 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-inventory\") pod \"ebfb47a0-7f15-4839-b84d-7aef631222f8\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.427908 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-repo-setup-combined-ca-bundle\") pod \"ebfb47a0-7f15-4839-b84d-7aef631222f8\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.428008 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts5l5\" (UniqueName: \"kubernetes.io/projected/ebfb47a0-7f15-4839-b84d-7aef631222f8-kube-api-access-ts5l5\") pod \"ebfb47a0-7f15-4839-b84d-7aef631222f8\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.428077 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-ssh-key\") pod \"ebfb47a0-7f15-4839-b84d-7aef631222f8\" (UID: \"ebfb47a0-7f15-4839-b84d-7aef631222f8\") " Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.437191 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "ebfb47a0-7f15-4839-b84d-7aef631222f8" (UID: "ebfb47a0-7f15-4839-b84d-7aef631222f8"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.437199 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebfb47a0-7f15-4839-b84d-7aef631222f8-kube-api-access-ts5l5" (OuterVolumeSpecName: "kube-api-access-ts5l5") pod "ebfb47a0-7f15-4839-b84d-7aef631222f8" (UID: "ebfb47a0-7f15-4839-b84d-7aef631222f8"). InnerVolumeSpecName "kube-api-access-ts5l5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.461133 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ebfb47a0-7f15-4839-b84d-7aef631222f8" (UID: "ebfb47a0-7f15-4839-b84d-7aef631222f8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.462507 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-inventory" (OuterVolumeSpecName: "inventory") pod "ebfb47a0-7f15-4839-b84d-7aef631222f8" (UID: "ebfb47a0-7f15-4839-b84d-7aef631222f8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.529941 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.529976 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.529999 4970 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebfb47a0-7f15-4839-b84d-7aef631222f8-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.530014 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts5l5\" (UniqueName: \"kubernetes.io/projected/ebfb47a0-7f15-4839-b84d-7aef631222f8-kube-api-access-ts5l5\") on node \"crc\" DevicePath \"\"" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.822410 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" event={"ID":"ebfb47a0-7f15-4839-b84d-7aef631222f8","Type":"ContainerDied","Data":"b0aaa2218dd0b023fb11ff70c2f647081ae41b82e098d2813fe1d8e676d5a298"} Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.822452 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0aaa2218dd0b023fb11ff70c2f647081ae41b82e098d2813fe1d8e676d5a298" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.822500 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.895690 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz"] Sep 30 10:07:57 crc kubenswrapper[4970]: E0930 10:07:57.896385 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebfb47a0-7f15-4839-b84d-7aef631222f8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.896423 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebfb47a0-7f15-4839-b84d-7aef631222f8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.896766 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebfb47a0-7f15-4839-b84d-7aef631222f8" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.897897 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.900196 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.900408 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.900582 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.901847 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:07:57 crc kubenswrapper[4970]: I0930 10:07:57.906132 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz"] Sep 30 10:07:58 crc kubenswrapper[4970]: I0930 10:07:58.039823 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tz9j\" (UniqueName: \"kubernetes.io/projected/a934a1c0-31ef-4341-a85f-a13cd865adc1-kube-api-access-6tz9j\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bbkpz\" (UID: \"a934a1c0-31ef-4341-a85f-a13cd865adc1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:07:58 crc kubenswrapper[4970]: I0930 10:07:58.039951 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a934a1c0-31ef-4341-a85f-a13cd865adc1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bbkpz\" (UID: \"a934a1c0-31ef-4341-a85f-a13cd865adc1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:07:58 crc kubenswrapper[4970]: I0930 10:07:58.039979 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a934a1c0-31ef-4341-a85f-a13cd865adc1-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bbkpz\" (UID: \"a934a1c0-31ef-4341-a85f-a13cd865adc1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:07:58 crc kubenswrapper[4970]: I0930 10:07:58.141924 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/a934a1c0-31ef-4341-a85f-a13cd865adc1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bbkpz\" (UID: \"a934a1c0-31ef-4341-a85f-a13cd865adc1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:07:58 crc kubenswrapper[4970]: I0930 10:07:58.141970 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a934a1c0-31ef-4341-a85f-a13cd865adc1-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bbkpz\" (UID: \"a934a1c0-31ef-4341-a85f-a13cd865adc1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:07:58 crc kubenswrapper[4970]: I0930 10:07:58.142075 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tz9j\" (UniqueName: \"kubernetes.io/projected/a934a1c0-31ef-4341-a85f-a13cd865adc1-kube-api-access-6tz9j\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bbkpz\" (UID: \"a934a1c0-31ef-4341-a85f-a13cd865adc1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:07:58 crc kubenswrapper[4970]: I0930 10:07:58.146500 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a934a1c0-31ef-4341-a85f-a13cd865adc1-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bbkpz\" (UID: \"a934a1c0-31ef-4341-a85f-a13cd865adc1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:07:58 crc kubenswrapper[4970]: I0930 10:07:58.151805 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a934a1c0-31ef-4341-a85f-a13cd865adc1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bbkpz\" (UID: \"a934a1c0-31ef-4341-a85f-a13cd865adc1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:07:58 crc kubenswrapper[4970]: I0930 10:07:58.190064 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tz9j\" (UniqueName: \"kubernetes.io/projected/a934a1c0-31ef-4341-a85f-a13cd865adc1-kube-api-access-6tz9j\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-bbkpz\" (UID: \"a934a1c0-31ef-4341-a85f-a13cd865adc1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:07:58 crc kubenswrapper[4970]: I0930 10:07:58.220374 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:07:58 crc kubenswrapper[4970]: I0930 10:07:58.799132 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz"] Sep 30 10:07:58 crc kubenswrapper[4970]: I0930 10:07:58.835479 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" event={"ID":"a934a1c0-31ef-4341-a85f-a13cd865adc1","Type":"ContainerStarted","Data":"0adf2ca76892d046190e562a62aba0e98bfab0a89400b89844c9675eaa6f7f38"} Sep 30 10:07:59 crc kubenswrapper[4970]: I0930 10:07:59.847022 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" event={"ID":"a934a1c0-31ef-4341-a85f-a13cd865adc1","Type":"ContainerStarted","Data":"2827256ed3abf4de05def1816f0f6e384a835c53de0a625b33a2e179748ecc24"} Sep 30 10:07:59 crc kubenswrapper[4970]: I0930 10:07:59.869187 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" podStartSLOduration=2.27020938 podStartE2EDuration="2.869167354s" podCreationTimestamp="2025-09-30 10:07:57 +0000 UTC" firstStartedPulling="2025-09-30 10:07:58.809230532 +0000 UTC m=+1291.881081456" lastFinishedPulling="2025-09-30 10:07:59.408188496 +0000 UTC m=+1292.480039430" observedRunningTime="2025-09-30 10:07:59.864258349 +0000 UTC m=+1292.936109303" watchObservedRunningTime="2025-09-30 10:07:59.869167354 +0000 UTC m=+1292.941018288" Sep 30 10:08:02 crc kubenswrapper[4970]: I0930 10:08:02.873740 4970 generic.go:334] "Generic (PLEG): container finished" podID="a934a1c0-31ef-4341-a85f-a13cd865adc1" containerID="2827256ed3abf4de05def1816f0f6e384a835c53de0a625b33a2e179748ecc24" exitCode=0 Sep 30 10:08:02 crc kubenswrapper[4970]: I0930 10:08:02.873838 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" event={"ID":"a934a1c0-31ef-4341-a85f-a13cd865adc1","Type":"ContainerDied","Data":"2827256ed3abf4de05def1816f0f6e384a835c53de0a625b33a2e179748ecc24"} Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.350631 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.461020 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a934a1c0-31ef-4341-a85f-a13cd865adc1-inventory\") pod \"a934a1c0-31ef-4341-a85f-a13cd865adc1\" (UID: \"a934a1c0-31ef-4341-a85f-a13cd865adc1\") " Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.461151 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a934a1c0-31ef-4341-a85f-a13cd865adc1-ssh-key\") pod \"a934a1c0-31ef-4341-a85f-a13cd865adc1\" (UID: \"a934a1c0-31ef-4341-a85f-a13cd865adc1\") " Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.461744 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tz9j\" (UniqueName: \"kubernetes.io/projected/a934a1c0-31ef-4341-a85f-a13cd865adc1-kube-api-access-6tz9j\") pod \"a934a1c0-31ef-4341-a85f-a13cd865adc1\" (UID: \"a934a1c0-31ef-4341-a85f-a13cd865adc1\") " Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.466635 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a934a1c0-31ef-4341-a85f-a13cd865adc1-kube-api-access-6tz9j" (OuterVolumeSpecName: "kube-api-access-6tz9j") pod "a934a1c0-31ef-4341-a85f-a13cd865adc1" (UID: "a934a1c0-31ef-4341-a85f-a13cd865adc1"). InnerVolumeSpecName "kube-api-access-6tz9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.491740 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a934a1c0-31ef-4341-a85f-a13cd865adc1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a934a1c0-31ef-4341-a85f-a13cd865adc1" (UID: "a934a1c0-31ef-4341-a85f-a13cd865adc1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.492811 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a934a1c0-31ef-4341-a85f-a13cd865adc1-inventory" (OuterVolumeSpecName: "inventory") pod "a934a1c0-31ef-4341-a85f-a13cd865adc1" (UID: "a934a1c0-31ef-4341-a85f-a13cd865adc1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.564015 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a934a1c0-31ef-4341-a85f-a13cd865adc1-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.564090 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a934a1c0-31ef-4341-a85f-a13cd865adc1-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.564108 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tz9j\" (UniqueName: \"kubernetes.io/projected/a934a1c0-31ef-4341-a85f-a13cd865adc1-kube-api-access-6tz9j\") on node \"crc\" DevicePath \"\"" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.892176 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" event={"ID":"a934a1c0-31ef-4341-a85f-a13cd865adc1","Type":"ContainerDied","Data":"0adf2ca76892d046190e562a62aba0e98bfab0a89400b89844c9675eaa6f7f38"} Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.892632 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0adf2ca76892d046190e562a62aba0e98bfab0a89400b89844c9675eaa6f7f38" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.892231 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-bbkpz" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.974690 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9"] Sep 30 10:08:04 crc kubenswrapper[4970]: E0930 10:08:04.975191 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a934a1c0-31ef-4341-a85f-a13cd865adc1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.975213 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="a934a1c0-31ef-4341-a85f-a13cd865adc1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.975483 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="a934a1c0-31ef-4341-a85f-a13cd865adc1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.976304 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.978969 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.979109 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.979191 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:08:04 crc kubenswrapper[4970]: I0930 10:08:04.981446 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.001155 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9"] Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.073502 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.073551 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.073605 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.073662 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49d8t\" (UniqueName: \"kubernetes.io/projected/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-kube-api-access-49d8t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.174850 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.175141 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9\" (UID: 
\"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.175255 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.175389 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49d8t\" (UniqueName: \"kubernetes.io/projected/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-kube-api-access-49d8t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.179584 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.181544 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.181673 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.195644 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49d8t\" (UniqueName: \"kubernetes.io/projected/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-kube-api-access-49d8t\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.296411 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.830668 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9"] Sep 30 10:08:05 crc kubenswrapper[4970]: I0930 10:08:05.902127 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" event={"ID":"f4b4ad42-77b6-450b-befa-9bb0012fe9ae","Type":"ContainerStarted","Data":"13c6b63e59595e59996808cae36dfda447d3404def33d40cadf4ad1ae6302b56"} Sep 30 10:08:06 crc kubenswrapper[4970]: I0930 10:08:06.916979 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" event={"ID":"f4b4ad42-77b6-450b-befa-9bb0012fe9ae","Type":"ContainerStarted","Data":"028347c01212cbd154ba4d630ab665722bbc2630a14a21ed125775d95754e366"} Sep 30 10:08:06 crc kubenswrapper[4970]: I0930 10:08:06.948843 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" podStartSLOduration=2.550588673 podStartE2EDuration="2.94882172s" podCreationTimestamp="2025-09-30 10:08:04 +0000 UTC" firstStartedPulling="2025-09-30 10:08:05.836461499 +0000 UTC m=+1298.908312433" lastFinishedPulling="2025-09-30 10:08:06.234694536 +0000 UTC m=+1299.306545480" observedRunningTime="2025-09-30 10:08:06.944256134 +0000 UTC m=+1300.016107078" watchObservedRunningTime="2025-09-30 10:08:06.94882172 +0000 UTC m=+1300.020672654" Sep 30 10:08:51 crc kubenswrapper[4970]: I0930 10:08:51.521425 4970 scope.go:117] "RemoveContainer" containerID="7717e1ebe633b757f10a61fba4e43c8f70e159350e71e4f2083ae1a532b1ce42" Sep 30 10:08:51 crc kubenswrapper[4970]: I0930 10:08:51.544332 4970 scope.go:117] "RemoveContainer" containerID="fd9724582e291160c5c6e6de21954648ffcb91e3259574943b5aedcba22f4786" Sep 30 10:08:51 crc kubenswrapper[4970]: I0930 10:08:51.616067 4970 scope.go:117] "RemoveContainer" containerID="36ca95c0bced24be37cb08569f86c977c97caa9f1602bce9e6315a4c0b6d22f6" Sep 30 10:08:51 crc kubenswrapper[4970]: I0930 10:08:51.652357 4970 scope.go:117] "RemoveContainer" containerID="93ee17f7bc5c56770e9db93758388a717b9debb96818c5240c4b90b4a41ac9fd" Sep 30 10:09:51 crc kubenswrapper[4970]: I0930 10:09:51.746350 4970 scope.go:117] "RemoveContainer" containerID="bbbcbafe272e447b4084d9993c023ef5f4944523ec900f99923215acfd440951" Sep 30 10:09:57 crc kubenswrapper[4970]: I0930 10:09:57.775365 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-h5gwk"] Sep 30 10:09:57 crc kubenswrapper[4970]: I0930 10:09:57.780773 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:09:57 crc kubenswrapper[4970]: I0930 10:09:57.786463 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5gwk"] Sep 30 10:09:57 crc kubenswrapper[4970]: I0930 10:09:57.913384 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cz48\" (UniqueName: \"kubernetes.io/projected/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-kube-api-access-5cz48\") pod \"community-operators-h5gwk\" (UID: \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\") " pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:09:57 crc kubenswrapper[4970]: I0930 10:09:57.913472 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-utilities\") pod \"community-operators-h5gwk\" (UID: \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\") " pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:09:57 crc kubenswrapper[4970]: I0930 10:09:57.913528 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-catalog-content\") pod \"community-operators-h5gwk\" (UID: \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\") " pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:09:58 crc kubenswrapper[4970]: I0930 10:09:58.015375 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-utilities\") pod \"community-operators-h5gwk\" (UID: \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\") " pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:09:58 crc kubenswrapper[4970]: I0930 10:09:58.015482 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-catalog-content\") pod \"community-operators-h5gwk\" (UID: \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\") " pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:09:58 crc kubenswrapper[4970]: I0930 10:09:58.015594 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cz48\" (UniqueName: \"kubernetes.io/projected/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-kube-api-access-5cz48\") pod \"community-operators-h5gwk\" (UID: \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\") " pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:09:58 crc kubenswrapper[4970]: I0930 10:09:58.033776 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-catalog-content\") pod \"community-operators-h5gwk\" (UID: \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\") " pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:09:58 crc kubenswrapper[4970]: I0930 10:09:58.033907 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-utilities\") pod \"community-operators-h5gwk\" (UID: \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\") " pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:09:58 crc kubenswrapper[4970]: I0930 10:09:58.065183 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5cz48\" (UniqueName: \"kubernetes.io/projected/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-kube-api-access-5cz48\") pod \"community-operators-h5gwk\" (UID: \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\") " pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:09:58 crc kubenswrapper[4970]: I0930 10:09:58.113668 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:09:58 crc kubenswrapper[4970]: I0930 10:09:58.652541 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-h5gwk"] Sep 30 10:09:59 crc kubenswrapper[4970]: I0930 10:09:59.020942 4970 generic.go:334] "Generic (PLEG): container finished" podID="fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" containerID="91e52b4cb511a62895a8f358794a1039700564c34a3739657e4eb3358f3341d8" exitCode=0 Sep 30 10:09:59 crc kubenswrapper[4970]: I0930 10:09:59.021040 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5gwk" event={"ID":"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80","Type":"ContainerDied","Data":"91e52b4cb511a62895a8f358794a1039700564c34a3739657e4eb3358f3341d8"} Sep 30 10:09:59 crc kubenswrapper[4970]: I0930 10:09:59.022410 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5gwk" event={"ID":"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80","Type":"ContainerStarted","Data":"b0916d66f1565be040702d87efafed56cbfb47657f7c305a89203c1bd6010b5b"} Sep 30 10:10:00 crc kubenswrapper[4970]: I0930 10:10:00.032488 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5gwk" event={"ID":"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80","Type":"ContainerStarted","Data":"93607a932255158fc2ab51124f92312a9dd84707d6087ac6a58dcda214094c51"} Sep 30 10:10:01 crc kubenswrapper[4970]: I0930 10:10:01.042734 4970 generic.go:334] "Generic (PLEG): container finished" podID="fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" containerID="93607a932255158fc2ab51124f92312a9dd84707d6087ac6a58dcda214094c51" exitCode=0 Sep 30 10:10:01 crc kubenswrapper[4970]: I0930 10:10:01.042940 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5gwk" event={"ID":"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80","Type":"ContainerDied","Data":"93607a932255158fc2ab51124f92312a9dd84707d6087ac6a58dcda214094c51"} Sep 30 10:10:02 crc kubenswrapper[4970]: I0930 10:10:02.053386 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5gwk" event={"ID":"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80","Type":"ContainerStarted","Data":"a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680"} Sep 30 10:10:02 crc kubenswrapper[4970]: I0930 10:10:02.082080 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-h5gwk" podStartSLOduration=2.458500889 podStartE2EDuration="5.082062766s" podCreationTimestamp="2025-09-30 10:09:57 +0000 UTC" firstStartedPulling="2025-09-30 10:09:59.023337278 +0000 UTC m=+1412.095188212" lastFinishedPulling="2025-09-30 10:10:01.646899155 +0000 UTC m=+1414.718750089" observedRunningTime="2025-09-30 10:10:02.077458949 +0000 UTC m=+1415.149309893" watchObservedRunningTime="2025-09-30 10:10:02.082062766 +0000 UTC m=+1415.153913700" Sep 30 10:10:04 crc kubenswrapper[4970]: I0930 10:10:04.821301 4970 patch_prober.go:28] interesting 
pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:10:04 crc kubenswrapper[4970]: I0930 10:10:04.821859 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:10:08 crc kubenswrapper[4970]: I0930 10:10:08.113948 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:10:08 crc kubenswrapper[4970]: I0930 10:10:08.114658 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:10:08 crc kubenswrapper[4970]: I0930 10:10:08.209543 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:10:08 crc kubenswrapper[4970]: I0930 10:10:08.284653 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:10:08 crc kubenswrapper[4970]: I0930 10:10:08.468170 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5gwk"] Sep 30 10:10:10 crc kubenswrapper[4970]: I0930 10:10:10.154705 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-h5gwk" podUID="fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" containerName="registry-server" containerID="cri-o://a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680" gracePeriod=2 Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.154745 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.168361 4970 generic.go:334] "Generic (PLEG): container finished" podID="fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" containerID="a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680" exitCode=0 Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.168414 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5gwk" event={"ID":"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80","Type":"ContainerDied","Data":"a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680"} Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.168461 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-h5gwk" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.168485 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-h5gwk" event={"ID":"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80","Type":"ContainerDied","Data":"b0916d66f1565be040702d87efafed56cbfb47657f7c305a89203c1bd6010b5b"} Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.168519 4970 scope.go:117] "RemoveContainer" containerID="a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.249812 4970 scope.go:117] "RemoveContainer" containerID="93607a932255158fc2ab51124f92312a9dd84707d6087ac6a58dcda214094c51" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.272857 4970 scope.go:117] "RemoveContainer" containerID="91e52b4cb511a62895a8f358794a1039700564c34a3739657e4eb3358f3341d8" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.290638 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cz48\" (UniqueName: \"kubernetes.io/projected/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-kube-api-access-5cz48\") pod \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\" (UID: \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\") " Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.290879 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-catalog-content\") pod \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\" (UID: \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\") " Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.290940 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-utilities\") pod \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\" (UID: \"fdfd4189-fdfb-41ce-b9ba-76c3e5353a80\") " Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.292893 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-utilities" (OuterVolumeSpecName: "utilities") pod "fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" (UID: "fdfd4189-fdfb-41ce-b9ba-76c3e5353a80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.297605 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-kube-api-access-5cz48" (OuterVolumeSpecName: "kube-api-access-5cz48") pod "fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" (UID: "fdfd4189-fdfb-41ce-b9ba-76c3e5353a80"). InnerVolumeSpecName "kube-api-access-5cz48". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.344534 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" (UID: "fdfd4189-fdfb-41ce-b9ba-76c3e5353a80"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.359872 4970 scope.go:117] "RemoveContainer" containerID="a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680" Sep 30 10:10:11 crc kubenswrapper[4970]: E0930 10:10:11.360548 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680\": container with ID starting with a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680 not found: ID does not exist" containerID="a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.360597 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680"} err="failed to get container status \"a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680\": rpc error: code = NotFound desc = could not find container \"a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680\": container with ID starting with a17265837c0dbf9391298beb18cb6ba68ffc355baa719f755d4698011986b680 not found: ID does not exist" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.360641 4970 scope.go:117] "RemoveContainer" containerID="93607a932255158fc2ab51124f92312a9dd84707d6087ac6a58dcda214094c51" Sep 30 10:10:11 crc kubenswrapper[4970]: E0930 10:10:11.361114 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93607a932255158fc2ab51124f92312a9dd84707d6087ac6a58dcda214094c51\": container with ID starting with 93607a932255158fc2ab51124f92312a9dd84707d6087ac6a58dcda214094c51 not found: ID does not exist" containerID="93607a932255158fc2ab51124f92312a9dd84707d6087ac6a58dcda214094c51" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.361162 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93607a932255158fc2ab51124f92312a9dd84707d6087ac6a58dcda214094c51"} err="failed to get container status \"93607a932255158fc2ab51124f92312a9dd84707d6087ac6a58dcda214094c51\": rpc error: code = NotFound desc = could not find container \"93607a932255158fc2ab51124f92312a9dd84707d6087ac6a58dcda214094c51\": container with ID starting with 93607a932255158fc2ab51124f92312a9dd84707d6087ac6a58dcda214094c51 not found: ID does not exist" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.361197 4970 scope.go:117] "RemoveContainer" containerID="91e52b4cb511a62895a8f358794a1039700564c34a3739657e4eb3358f3341d8" Sep 30 10:10:11 crc kubenswrapper[4970]: E0930 10:10:11.361630 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91e52b4cb511a62895a8f358794a1039700564c34a3739657e4eb3358f3341d8\": container with ID starting with 91e52b4cb511a62895a8f358794a1039700564c34a3739657e4eb3358f3341d8 not found: ID does not exist" containerID="91e52b4cb511a62895a8f358794a1039700564c34a3739657e4eb3358f3341d8" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.361656 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91e52b4cb511a62895a8f358794a1039700564c34a3739657e4eb3358f3341d8"} err="failed to get container status \"91e52b4cb511a62895a8f358794a1039700564c34a3739657e4eb3358f3341d8\": rpc error: code = NotFound desc = could not 
find container \"91e52b4cb511a62895a8f358794a1039700564c34a3739657e4eb3358f3341d8\": container with ID starting with 91e52b4cb511a62895a8f358794a1039700564c34a3739657e4eb3358f3341d8 not found: ID does not exist" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.393766 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cz48\" (UniqueName: \"kubernetes.io/projected/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-kube-api-access-5cz48\") on node \"crc\" DevicePath \"\"" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.393801 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.393814 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.513306 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-h5gwk"] Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.524101 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-h5gwk"] Sep 30 10:10:11 crc kubenswrapper[4970]: I0930 10:10:11.678539 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" path="/var/lib/kubelet/pods/fdfd4189-fdfb-41ce-b9ba-76c3e5353a80/volumes" Sep 30 10:10:34 crc kubenswrapper[4970]: I0930 10:10:34.821393 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:10:34 crc kubenswrapper[4970]: I0930 10:10:34.821949 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:10:51 crc kubenswrapper[4970]: I0930 10:10:51.842177 4970 scope.go:117] "RemoveContainer" containerID="b96cfd4d2eddd1faa5f6cdb248bf5fa76ce3d0c6b60f5f1b59d7b0ed64e00874" Sep 30 10:10:51 crc kubenswrapper[4970]: I0930 10:10:51.885195 4970 scope.go:117] "RemoveContainer" containerID="80adca25dff044014a98debe787a64f4295607628a2f68a030f0a63963a9ea48" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.330039 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2s5xn"] Sep 30 10:10:57 crc kubenswrapper[4970]: E0930 10:10:57.333024 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" containerName="extract-utilities" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.333214 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" containerName="extract-utilities" Sep 30 10:10:57 crc kubenswrapper[4970]: E0930 10:10:57.333374 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" containerName="registry-server" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 
10:10:57.333507 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" containerName="registry-server" Sep 30 10:10:57 crc kubenswrapper[4970]: E0930 10:10:57.333672 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" containerName="extract-content" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.333796 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" containerName="extract-content" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.334334 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfd4189-fdfb-41ce-b9ba-76c3e5353a80" containerName="registry-server" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.337598 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.351279 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2s5xn"] Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.383482 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-utilities\") pod \"redhat-operators-2s5xn\" (UID: \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\") " pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.383588 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-catalog-content\") pod \"redhat-operators-2s5xn\" (UID: \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\") " pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.383661 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc7zf\" (UniqueName: \"kubernetes.io/projected/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-kube-api-access-lc7zf\") pod \"redhat-operators-2s5xn\" (UID: \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\") " pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.485710 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-utilities\") pod \"redhat-operators-2s5xn\" (UID: \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\") " pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.485780 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-catalog-content\") pod \"redhat-operators-2s5xn\" (UID: \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\") " pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.485803 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc7zf\" (UniqueName: \"kubernetes.io/projected/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-kube-api-access-lc7zf\") pod \"redhat-operators-2s5xn\" (UID: \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\") " pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 
10:10:57.486277 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-utilities\") pod \"redhat-operators-2s5xn\" (UID: \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\") " pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.486379 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-catalog-content\") pod \"redhat-operators-2s5xn\" (UID: \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\") " pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.504697 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc7zf\" (UniqueName: \"kubernetes.io/projected/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-kube-api-access-lc7zf\") pod \"redhat-operators-2s5xn\" (UID: \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\") " pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:10:57 crc kubenswrapper[4970]: I0930 10:10:57.672576 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:10:58 crc kubenswrapper[4970]: I0930 10:10:58.139155 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2s5xn"] Sep 30 10:10:58 crc kubenswrapper[4970]: I0930 10:10:58.654884 4970 generic.go:334] "Generic (PLEG): container finished" podID="6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" containerID="ed4ad79d663293ea15e7f6ed6c01c2741d76b4f9ca471e40c49251b3b9caa78b" exitCode=0 Sep 30 10:10:58 crc kubenswrapper[4970]: I0930 10:10:58.654924 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2s5xn" event={"ID":"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149","Type":"ContainerDied","Data":"ed4ad79d663293ea15e7f6ed6c01c2741d76b4f9ca471e40c49251b3b9caa78b"} Sep 30 10:10:58 crc kubenswrapper[4970]: I0930 10:10:58.655253 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2s5xn" event={"ID":"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149","Type":"ContainerStarted","Data":"a41c41592e695b8cd6c4ed2b4fa7c3de2eeaad15cf0e51e539cd5dfdb5247bb4"} Sep 30 10:11:00 crc kubenswrapper[4970]: I0930 10:11:00.677549 4970 generic.go:334] "Generic (PLEG): container finished" podID="6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" containerID="eaec4b9cfeb5612e7e3c6aa01faeabc029989fab1d2b60f7371461483fdb7ea6" exitCode=0 Sep 30 10:11:00 crc kubenswrapper[4970]: I0930 10:11:00.677598 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2s5xn" event={"ID":"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149","Type":"ContainerDied","Data":"eaec4b9cfeb5612e7e3c6aa01faeabc029989fab1d2b60f7371461483fdb7ea6"} Sep 30 10:11:01 crc kubenswrapper[4970]: I0930 10:11:01.686030 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2s5xn" event={"ID":"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149","Type":"ContainerStarted","Data":"98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad"} Sep 30 10:11:01 crc kubenswrapper[4970]: I0930 10:11:01.714309 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2s5xn" podStartSLOduration=2.27319752 podStartE2EDuration="4.714290738s" podCreationTimestamp="2025-09-30 10:10:57 +0000 
UTC" firstStartedPulling="2025-09-30 10:10:58.657934015 +0000 UTC m=+1471.729784969" lastFinishedPulling="2025-09-30 10:11:01.099027253 +0000 UTC m=+1474.170878187" observedRunningTime="2025-09-30 10:11:01.71109729 +0000 UTC m=+1474.782948244" watchObservedRunningTime="2025-09-30 10:11:01.714290738 +0000 UTC m=+1474.786141662" Sep 30 10:11:04 crc kubenswrapper[4970]: I0930 10:11:04.820839 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:11:04 crc kubenswrapper[4970]: I0930 10:11:04.821681 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:11:04 crc kubenswrapper[4970]: I0930 10:11:04.821731 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 10:11:04 crc kubenswrapper[4970]: I0930 10:11:04.822377 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"211775f0ae6ad2cf1245081565b8e3cfb1184d576eabfc89da9e31fdd3771913"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 10:11:04 crc kubenswrapper[4970]: I0930 10:11:04.822443 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://211775f0ae6ad2cf1245081565b8e3cfb1184d576eabfc89da9e31fdd3771913" gracePeriod=600 Sep 30 10:11:05 crc kubenswrapper[4970]: E0930 10:11:05.018904 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92198682_93fe_4b8a_8b03_bb768b56a129.slice/crio-conmon-211775f0ae6ad2cf1245081565b8e3cfb1184d576eabfc89da9e31fdd3771913.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92198682_93fe_4b8a_8b03_bb768b56a129.slice/crio-211775f0ae6ad2cf1245081565b8e3cfb1184d576eabfc89da9e31fdd3771913.scope\": RecentStats: unable to find data in memory cache]" Sep 30 10:11:05 crc kubenswrapper[4970]: I0930 10:11:05.722527 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="211775f0ae6ad2cf1245081565b8e3cfb1184d576eabfc89da9e31fdd3771913" exitCode=0 Sep 30 10:11:05 crc kubenswrapper[4970]: I0930 10:11:05.722585 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"211775f0ae6ad2cf1245081565b8e3cfb1184d576eabfc89da9e31fdd3771913"} Sep 30 10:11:05 crc kubenswrapper[4970]: I0930 10:11:05.722888 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" 
event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac"} Sep 30 10:11:05 crc kubenswrapper[4970]: I0930 10:11:05.722909 4970 scope.go:117] "RemoveContainer" containerID="39953ec91959e4f49ceb3ed4f3dfdbe1d7a2e025175d7df609efe49614e86013" Sep 30 10:11:07 crc kubenswrapper[4970]: I0930 10:11:07.679680 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:11:07 crc kubenswrapper[4970]: I0930 10:11:07.680128 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:11:07 crc kubenswrapper[4970]: I0930 10:11:07.739134 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:11:07 crc kubenswrapper[4970]: I0930 10:11:07.793870 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:11:07 crc kubenswrapper[4970]: I0930 10:11:07.979436 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2s5xn"] Sep 30 10:11:09 crc kubenswrapper[4970]: I0930 10:11:09.763559 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2s5xn" podUID="6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" containerName="registry-server" containerID="cri-o://98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad" gracePeriod=2 Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.233461 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.322578 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-utilities\") pod \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\" (UID: \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\") " Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.322718 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-catalog-content\") pod \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\" (UID: \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\") " Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.322926 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lc7zf\" (UniqueName: \"kubernetes.io/projected/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-kube-api-access-lc7zf\") pod \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\" (UID: \"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149\") " Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.324418 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-utilities" (OuterVolumeSpecName: "utilities") pod "6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" (UID: "6e25aeb8-60ca-4a21-b6b7-37ccc37bd149"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.332042 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-kube-api-access-lc7zf" (OuterVolumeSpecName: "kube-api-access-lc7zf") pod "6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" (UID: "6e25aeb8-60ca-4a21-b6b7-37ccc37bd149"). InnerVolumeSpecName "kube-api-access-lc7zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.418606 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" (UID: "6e25aeb8-60ca-4a21-b6b7-37ccc37bd149"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.425008 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lc7zf\" (UniqueName: \"kubernetes.io/projected/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-kube-api-access-lc7zf\") on node \"crc\" DevicePath \"\"" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.425039 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.425049 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.773197 4970 generic.go:334] "Generic (PLEG): container finished" podID="6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" containerID="98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad" exitCode=0 Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.773435 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2s5xn" event={"ID":"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149","Type":"ContainerDied","Data":"98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad"} Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.773600 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2s5xn" event={"ID":"6e25aeb8-60ca-4a21-b6b7-37ccc37bd149","Type":"ContainerDied","Data":"a41c41592e695b8cd6c4ed2b4fa7c3de2eeaad15cf0e51e539cd5dfdb5247bb4"} Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.773622 4970 scope.go:117] "RemoveContainer" containerID="98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.773485 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2s5xn" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.825518 4970 scope.go:117] "RemoveContainer" containerID="eaec4b9cfeb5612e7e3c6aa01faeabc029989fab1d2b60f7371461483fdb7ea6" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.827750 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2s5xn"] Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.839057 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2s5xn"] Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.846805 4970 scope.go:117] "RemoveContainer" containerID="ed4ad79d663293ea15e7f6ed6c01c2741d76b4f9ca471e40c49251b3b9caa78b" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.888912 4970 scope.go:117] "RemoveContainer" containerID="98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad" Sep 30 10:11:10 crc kubenswrapper[4970]: E0930 10:11:10.890268 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad\": container with ID starting with 98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad not found: ID does not exist" containerID="98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.890310 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad"} err="failed to get container status \"98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad\": rpc error: code = NotFound desc = could not find container \"98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad\": container with ID starting with 98188d02e1f028fa850a70944a6099288d21264b7be8c7de7701d8f75a532aad not found: ID does not exist" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.890340 4970 scope.go:117] "RemoveContainer" containerID="eaec4b9cfeb5612e7e3c6aa01faeabc029989fab1d2b60f7371461483fdb7ea6" Sep 30 10:11:10 crc kubenswrapper[4970]: E0930 10:11:10.890679 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaec4b9cfeb5612e7e3c6aa01faeabc029989fab1d2b60f7371461483fdb7ea6\": container with ID starting with eaec4b9cfeb5612e7e3c6aa01faeabc029989fab1d2b60f7371461483fdb7ea6 not found: ID does not exist" containerID="eaec4b9cfeb5612e7e3c6aa01faeabc029989fab1d2b60f7371461483fdb7ea6" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.890716 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaec4b9cfeb5612e7e3c6aa01faeabc029989fab1d2b60f7371461483fdb7ea6"} err="failed to get container status \"eaec4b9cfeb5612e7e3c6aa01faeabc029989fab1d2b60f7371461483fdb7ea6\": rpc error: code = NotFound desc = could not find container \"eaec4b9cfeb5612e7e3c6aa01faeabc029989fab1d2b60f7371461483fdb7ea6\": container with ID starting with eaec4b9cfeb5612e7e3c6aa01faeabc029989fab1d2b60f7371461483fdb7ea6 not found: ID does not exist" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.890744 4970 scope.go:117] "RemoveContainer" containerID="ed4ad79d663293ea15e7f6ed6c01c2741d76b4f9ca471e40c49251b3b9caa78b" Sep 30 10:11:10 crc kubenswrapper[4970]: E0930 10:11:10.891083 4970 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"ed4ad79d663293ea15e7f6ed6c01c2741d76b4f9ca471e40c49251b3b9caa78b\": container with ID starting with ed4ad79d663293ea15e7f6ed6c01c2741d76b4f9ca471e40c49251b3b9caa78b not found: ID does not exist" containerID="ed4ad79d663293ea15e7f6ed6c01c2741d76b4f9ca471e40c49251b3b9caa78b" Sep 30 10:11:10 crc kubenswrapper[4970]: I0930 10:11:10.891108 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed4ad79d663293ea15e7f6ed6c01c2741d76b4f9ca471e40c49251b3b9caa78b"} err="failed to get container status \"ed4ad79d663293ea15e7f6ed6c01c2741d76b4f9ca471e40c49251b3b9caa78b\": rpc error: code = NotFound desc = could not find container \"ed4ad79d663293ea15e7f6ed6c01c2741d76b4f9ca471e40c49251b3b9caa78b\": container with ID starting with ed4ad79d663293ea15e7f6ed6c01c2741d76b4f9ca471e40c49251b3b9caa78b not found: ID does not exist" Sep 30 10:11:11 crc kubenswrapper[4970]: I0930 10:11:11.688837 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" path="/var/lib/kubelet/pods/6e25aeb8-60ca-4a21-b6b7-37ccc37bd149/volumes" Sep 30 10:11:14 crc kubenswrapper[4970]: I0930 10:11:14.829554 4970 generic.go:334] "Generic (PLEG): container finished" podID="f4b4ad42-77b6-450b-befa-9bb0012fe9ae" containerID="028347c01212cbd154ba4d630ab665722bbc2630a14a21ed125775d95754e366" exitCode=0 Sep 30 10:11:14 crc kubenswrapper[4970]: I0930 10:11:14.829649 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" event={"ID":"f4b4ad42-77b6-450b-befa-9bb0012fe9ae","Type":"ContainerDied","Data":"028347c01212cbd154ba4d630ab665722bbc2630a14a21ed125775d95754e366"} Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.252881 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.330327 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49d8t\" (UniqueName: \"kubernetes.io/projected/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-kube-api-access-49d8t\") pod \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.330371 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-bootstrap-combined-ca-bundle\") pod \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.330426 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-ssh-key\") pod \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.330497 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-inventory\") pod \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\" (UID: \"f4b4ad42-77b6-450b-befa-9bb0012fe9ae\") " Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.341731 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "f4b4ad42-77b6-450b-befa-9bb0012fe9ae" (UID: "f4b4ad42-77b6-450b-befa-9bb0012fe9ae"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.345032 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-kube-api-access-49d8t" (OuterVolumeSpecName: "kube-api-access-49d8t") pod "f4b4ad42-77b6-450b-befa-9bb0012fe9ae" (UID: "f4b4ad42-77b6-450b-befa-9bb0012fe9ae"). InnerVolumeSpecName "kube-api-access-49d8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.366915 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-inventory" (OuterVolumeSpecName: "inventory") pod "f4b4ad42-77b6-450b-befa-9bb0012fe9ae" (UID: "f4b4ad42-77b6-450b-befa-9bb0012fe9ae"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.369355 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f4b4ad42-77b6-450b-befa-9bb0012fe9ae" (UID: "f4b4ad42-77b6-450b-befa-9bb0012fe9ae"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.433144 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.433184 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49d8t\" (UniqueName: \"kubernetes.io/projected/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-kube-api-access-49d8t\") on node \"crc\" DevicePath \"\"" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.433200 4970 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.433212 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f4b4ad42-77b6-450b-befa-9bb0012fe9ae-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.849612 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" event={"ID":"f4b4ad42-77b6-450b-befa-9bb0012fe9ae","Type":"ContainerDied","Data":"13c6b63e59595e59996808cae36dfda447d3404def33d40cadf4ad1ae6302b56"} Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.849917 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13c6b63e59595e59996808cae36dfda447d3404def33d40cadf4ad1ae6302b56" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.849967 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.934333 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p"] Sep 30 10:11:16 crc kubenswrapper[4970]: E0930 10:11:16.934735 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b4ad42-77b6-450b-befa-9bb0012fe9ae" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.934751 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b4ad42-77b6-450b-befa-9bb0012fe9ae" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Sep 30 10:11:16 crc kubenswrapper[4970]: E0930 10:11:16.934763 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" containerName="extract-utilities" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.934770 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" containerName="extract-utilities" Sep 30 10:11:16 crc kubenswrapper[4970]: E0930 10:11:16.934812 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" containerName="registry-server" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.934819 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" containerName="registry-server" Sep 30 10:11:16 crc kubenswrapper[4970]: E0930 10:11:16.934828 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" containerName="extract-content" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.934833 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" containerName="extract-content" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.935079 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e25aeb8-60ca-4a21-b6b7-37ccc37bd149" containerName="registry-server" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.935103 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b4ad42-77b6-450b-befa-9bb0012fe9ae" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.935708 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.938005 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.938225 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.938351 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.938420 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:11:16 crc kubenswrapper[4970]: I0930 10:11:16.946884 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p"] Sep 30 10:11:17 crc kubenswrapper[4970]: I0930 10:11:17.043123 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/877ee61c-4abb-4daf-a42d-f2d26afbe137-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-77k7p\" (UID: \"877ee61c-4abb-4daf-a42d-f2d26afbe137\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:11:17 crc kubenswrapper[4970]: I0930 10:11:17.043434 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/877ee61c-4abb-4daf-a42d-f2d26afbe137-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-77k7p\" (UID: \"877ee61c-4abb-4daf-a42d-f2d26afbe137\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:11:17 crc kubenswrapper[4970]: I0930 10:11:17.043558 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shqmr\" (UniqueName: \"kubernetes.io/projected/877ee61c-4abb-4daf-a42d-f2d26afbe137-kube-api-access-shqmr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-77k7p\" (UID: \"877ee61c-4abb-4daf-a42d-f2d26afbe137\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:11:17 crc kubenswrapper[4970]: I0930 10:11:17.145625 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shqmr\" (UniqueName: \"kubernetes.io/projected/877ee61c-4abb-4daf-a42d-f2d26afbe137-kube-api-access-shqmr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-77k7p\" (UID: \"877ee61c-4abb-4daf-a42d-f2d26afbe137\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:11:17 crc kubenswrapper[4970]: I0930 10:11:17.145797 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/877ee61c-4abb-4daf-a42d-f2d26afbe137-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-77k7p\" (UID: \"877ee61c-4abb-4daf-a42d-f2d26afbe137\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:11:17 crc kubenswrapper[4970]: I0930 10:11:17.145865 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/877ee61c-4abb-4daf-a42d-f2d26afbe137-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-77k7p\" (UID: \"877ee61c-4abb-4daf-a42d-f2d26afbe137\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:11:17 crc kubenswrapper[4970]: I0930 10:11:17.150413 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/877ee61c-4abb-4daf-a42d-f2d26afbe137-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-77k7p\" (UID: \"877ee61c-4abb-4daf-a42d-f2d26afbe137\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:11:17 crc kubenswrapper[4970]: I0930 10:11:17.151856 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/877ee61c-4abb-4daf-a42d-f2d26afbe137-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-77k7p\" (UID: \"877ee61c-4abb-4daf-a42d-f2d26afbe137\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:11:17 crc kubenswrapper[4970]: I0930 10:11:17.160826 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shqmr\" (UniqueName: \"kubernetes.io/projected/877ee61c-4abb-4daf-a42d-f2d26afbe137-kube-api-access-shqmr\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-77k7p\" (UID: \"877ee61c-4abb-4daf-a42d-f2d26afbe137\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:11:17 crc kubenswrapper[4970]: I0930 10:11:17.251966 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:11:17 crc kubenswrapper[4970]: I0930 10:11:17.825121 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p"] Sep 30 10:11:17 crc kubenswrapper[4970]: I0930 10:11:17.859622 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" event={"ID":"877ee61c-4abb-4daf-a42d-f2d26afbe137","Type":"ContainerStarted","Data":"406e874919a91d06596861a5862442ada9d755e54c802a9c9a81d96565709a21"} Sep 30 10:11:18 crc kubenswrapper[4970]: I0930 10:11:18.869937 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" event={"ID":"877ee61c-4abb-4daf-a42d-f2d26afbe137","Type":"ContainerStarted","Data":"cdcb098347146739812b5f0584ac081a0eac25c4023032a732ff88207465ad61"} Sep 30 10:11:18 crc kubenswrapper[4970]: I0930 10:11:18.918829 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" podStartSLOduration=2.494790483 podStartE2EDuration="2.918799046s" podCreationTimestamp="2025-09-30 10:11:16 +0000 UTC" firstStartedPulling="2025-09-30 10:11:17.833610982 +0000 UTC m=+1490.905461906" lastFinishedPulling="2025-09-30 10:11:18.257619535 +0000 UTC m=+1491.329470469" observedRunningTime="2025-09-30 10:11:18.900095676 +0000 UTC m=+1491.971946620" watchObservedRunningTime="2025-09-30 10:11:18.918799046 +0000 UTC m=+1491.990649980" Sep 30 10:11:51 crc kubenswrapper[4970]: I0930 10:11:51.986206 4970 scope.go:117] "RemoveContainer" containerID="e4ec6327b128effe3978c3cdd6a3300e2de8fb083ec9b519e7a4902398876f4c" Sep 30 10:11:52 crc kubenswrapper[4970]: I0930 10:11:52.019185 4970 scope.go:117] "RemoveContainer" 
containerID="a555b90baf26b9cef620793203b081544bce0b44e423317548638925cfadca3b" Sep 30 10:11:52 crc kubenswrapper[4970]: I0930 10:11:52.041957 4970 scope.go:117] "RemoveContainer" containerID="c0abc0449e9b19d1d393344e411e0d079dcd58bfb849bc523aac496d6f2e885d" Sep 30 10:12:14 crc kubenswrapper[4970]: I0930 10:12:14.036815 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-dbz5v"] Sep 30 10:12:14 crc kubenswrapper[4970]: I0930 10:12:14.046777 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-dbz5v"] Sep 30 10:12:15 crc kubenswrapper[4970]: I0930 10:12:15.040032 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-rjrqv"] Sep 30 10:12:15 crc kubenswrapper[4970]: I0930 10:12:15.064414 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-mn485"] Sep 30 10:12:15 crc kubenswrapper[4970]: I0930 10:12:15.077877 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-rjrqv"] Sep 30 10:12:15 crc kubenswrapper[4970]: I0930 10:12:15.089641 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-mn485"] Sep 30 10:12:15 crc kubenswrapper[4970]: I0930 10:12:15.710124 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22f61c1e-f425-4745-ab1c-b93977f1152c" path="/var/lib/kubelet/pods/22f61c1e-f425-4745-ab1c-b93977f1152c/volumes" Sep 30 10:12:15 crc kubenswrapper[4970]: I0930 10:12:15.712048 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2" path="/var/lib/kubelet/pods/35cfe6c2-e537-48d0-bb4f-ce375ea3c6f2/volumes" Sep 30 10:12:15 crc kubenswrapper[4970]: I0930 10:12:15.712829 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="576a687d-6e95-4703-829e-574a84a838dd" path="/var/lib/kubelet/pods/576a687d-6e95-4703-829e-574a84a838dd/volumes" Sep 30 10:12:24 crc kubenswrapper[4970]: I0930 10:12:24.033836 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-00fc-account-create-bwbx9"] Sep 30 10:12:24 crc kubenswrapper[4970]: I0930 10:12:24.044974 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-42b0-account-create-7fpbl"] Sep 30 10:12:24 crc kubenswrapper[4970]: I0930 10:12:24.054768 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-00fc-account-create-bwbx9"] Sep 30 10:12:24 crc kubenswrapper[4970]: I0930 10:12:24.062469 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-42b0-account-create-7fpbl"] Sep 30 10:12:25 crc kubenswrapper[4970]: I0930 10:12:25.687675 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ec35f86-ee7a-432a-a69f-3235fa387bf6" path="/var/lib/kubelet/pods/2ec35f86-ee7a-432a-a69f-3235fa387bf6/volumes" Sep 30 10:12:25 crc kubenswrapper[4970]: I0930 10:12:25.689146 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5387d0e5-d9fe-4032-95ae-c1d205e27e69" path="/var/lib/kubelet/pods/5387d0e5-d9fe-4032-95ae-c1d205e27e69/volumes" Sep 30 10:12:33 crc kubenswrapper[4970]: I0930 10:12:33.028370 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e136-account-create-p4txc"] Sep 30 10:12:33 crc kubenswrapper[4970]: I0930 10:12:33.037185 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e136-account-create-p4txc"] Sep 30 10:12:33 crc kubenswrapper[4970]: I0930 10:12:33.679935 4970 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe24c936-58db-48a2-8cf5-ad1ee7473ef8" path="/var/lib/kubelet/pods/fe24c936-58db-48a2-8cf5-ad1ee7473ef8/volumes" Sep 30 10:12:42 crc kubenswrapper[4970]: I0930 10:12:42.036692 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-mkvnp"] Sep 30 10:12:42 crc kubenswrapper[4970]: I0930 10:12:42.047059 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-mkvnp"] Sep 30 10:12:43 crc kubenswrapper[4970]: I0930 10:12:43.033749 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-df4fk"] Sep 30 10:12:43 crc kubenswrapper[4970]: I0930 10:12:43.042126 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-df4fk"] Sep 30 10:12:43 crc kubenswrapper[4970]: I0930 10:12:43.685609 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21283459-5e37-4ebb-9f88-4ddfb5b3dc79" path="/var/lib/kubelet/pods/21283459-5e37-4ebb-9f88-4ddfb5b3dc79/volumes" Sep 30 10:12:43 crc kubenswrapper[4970]: I0930 10:12:43.686120 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e97f0e8-a17c-47b6-ae38-ac69404b9b01" path="/var/lib/kubelet/pods/2e97f0e8-a17c-47b6-ae38-ac69404b9b01/volumes" Sep 30 10:12:44 crc kubenswrapper[4970]: I0930 10:12:44.036286 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-cl74x"] Sep 30 10:12:44 crc kubenswrapper[4970]: I0930 10:12:44.048901 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-cl74x"] Sep 30 10:12:45 crc kubenswrapper[4970]: I0930 10:12:45.679070 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f26d3109-a89d-4787-899e-a370559a9f42" path="/var/lib/kubelet/pods/f26d3109-a89d-4787-899e-a370559a9f42/volumes" Sep 30 10:12:46 crc kubenswrapper[4970]: I0930 10:12:46.692452 4970 generic.go:334] "Generic (PLEG): container finished" podID="877ee61c-4abb-4daf-a42d-f2d26afbe137" containerID="cdcb098347146739812b5f0584ac081a0eac25c4023032a732ff88207465ad61" exitCode=0 Sep 30 10:12:46 crc kubenswrapper[4970]: I0930 10:12:46.692546 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" event={"ID":"877ee61c-4abb-4daf-a42d-f2d26afbe137","Type":"ContainerDied","Data":"cdcb098347146739812b5f0584ac081a0eac25c4023032a732ff88207465ad61"} Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.070181 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.138738 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shqmr\" (UniqueName: \"kubernetes.io/projected/877ee61c-4abb-4daf-a42d-f2d26afbe137-kube-api-access-shqmr\") pod \"877ee61c-4abb-4daf-a42d-f2d26afbe137\" (UID: \"877ee61c-4abb-4daf-a42d-f2d26afbe137\") " Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.138889 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/877ee61c-4abb-4daf-a42d-f2d26afbe137-ssh-key\") pod \"877ee61c-4abb-4daf-a42d-f2d26afbe137\" (UID: \"877ee61c-4abb-4daf-a42d-f2d26afbe137\") " Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.139126 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/877ee61c-4abb-4daf-a42d-f2d26afbe137-inventory\") pod \"877ee61c-4abb-4daf-a42d-f2d26afbe137\" (UID: \"877ee61c-4abb-4daf-a42d-f2d26afbe137\") " Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.144892 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/877ee61c-4abb-4daf-a42d-f2d26afbe137-kube-api-access-shqmr" (OuterVolumeSpecName: "kube-api-access-shqmr") pod "877ee61c-4abb-4daf-a42d-f2d26afbe137" (UID: "877ee61c-4abb-4daf-a42d-f2d26afbe137"). InnerVolumeSpecName "kube-api-access-shqmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.170855 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/877ee61c-4abb-4daf-a42d-f2d26afbe137-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "877ee61c-4abb-4daf-a42d-f2d26afbe137" (UID: "877ee61c-4abb-4daf-a42d-f2d26afbe137"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.170909 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/877ee61c-4abb-4daf-a42d-f2d26afbe137-inventory" (OuterVolumeSpecName: "inventory") pod "877ee61c-4abb-4daf-a42d-f2d26afbe137" (UID: "877ee61c-4abb-4daf-a42d-f2d26afbe137"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.242060 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shqmr\" (UniqueName: \"kubernetes.io/projected/877ee61c-4abb-4daf-a42d-f2d26afbe137-kube-api-access-shqmr\") on node \"crc\" DevicePath \"\"" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.242111 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/877ee61c-4abb-4daf-a42d-f2d26afbe137-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.242140 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/877ee61c-4abb-4daf-a42d-f2d26afbe137-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.711186 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" event={"ID":"877ee61c-4abb-4daf-a42d-f2d26afbe137","Type":"ContainerDied","Data":"406e874919a91d06596861a5862442ada9d755e54c802a9c9a81d96565709a21"} Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.711458 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="406e874919a91d06596861a5862442ada9d755e54c802a9c9a81d96565709a21" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.711251 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-77k7p" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.783491 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd"] Sep 30 10:12:48 crc kubenswrapper[4970]: E0930 10:12:48.783938 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877ee61c-4abb-4daf-a42d-f2d26afbe137" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.783959 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="877ee61c-4abb-4daf-a42d-f2d26afbe137" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.784227 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="877ee61c-4abb-4daf-a42d-f2d26afbe137" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.784913 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.788159 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.788257 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.788265 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.788432 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.799940 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd"] Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.854741 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d70e09c1-47df-4742-b2fc-77c354169b46-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd\" (UID: \"d70e09c1-47df-4742-b2fc-77c354169b46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.854795 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d70e09c1-47df-4742-b2fc-77c354169b46-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd\" (UID: \"d70e09c1-47df-4742-b2fc-77c354169b46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.854839 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8vkj\" (UniqueName: \"kubernetes.io/projected/d70e09c1-47df-4742-b2fc-77c354169b46-kube-api-access-s8vkj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd\" (UID: \"d70e09c1-47df-4742-b2fc-77c354169b46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.956616 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d70e09c1-47df-4742-b2fc-77c354169b46-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd\" (UID: \"d70e09c1-47df-4742-b2fc-77c354169b46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.956692 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d70e09c1-47df-4742-b2fc-77c354169b46-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd\" (UID: \"d70e09c1-47df-4742-b2fc-77c354169b46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.956747 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8vkj\" (UniqueName: \"kubernetes.io/projected/d70e09c1-47df-4742-b2fc-77c354169b46-kube-api-access-s8vkj\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd\" (UID: \"d70e09c1-47df-4742-b2fc-77c354169b46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.964841 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d70e09c1-47df-4742-b2fc-77c354169b46-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd\" (UID: \"d70e09c1-47df-4742-b2fc-77c354169b46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.967396 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d70e09c1-47df-4742-b2fc-77c354169b46-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd\" (UID: \"d70e09c1-47df-4742-b2fc-77c354169b46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:12:48 crc kubenswrapper[4970]: I0930 10:12:48.991919 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8vkj\" (UniqueName: \"kubernetes.io/projected/d70e09c1-47df-4742-b2fc-77c354169b46-kube-api-access-s8vkj\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd\" (UID: \"d70e09c1-47df-4742-b2fc-77c354169b46\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:12:49 crc kubenswrapper[4970]: I0930 10:12:49.100516 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:12:49 crc kubenswrapper[4970]: I0930 10:12:49.622536 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 10:12:49 crc kubenswrapper[4970]: I0930 10:12:49.626037 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd"] Sep 30 10:12:49 crc kubenswrapper[4970]: I0930 10:12:49.722683 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" event={"ID":"d70e09c1-47df-4742-b2fc-77c354169b46","Type":"ContainerStarted","Data":"9a7693d47d4f1a5af279beb646ec89e79adf7b86616f6bbe928006c2a904809c"} Sep 30 10:12:50 crc kubenswrapper[4970]: I0930 10:12:50.734144 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" event={"ID":"d70e09c1-47df-4742-b2fc-77c354169b46","Type":"ContainerStarted","Data":"bc3db50d78edf9822f99f88f573c8fb6f6a6e693aed0d95df10accd5c0518130"} Sep 30 10:12:50 crc kubenswrapper[4970]: I0930 10:12:50.749334 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" podStartSLOduration=2.20621959 podStartE2EDuration="2.749318491s" podCreationTimestamp="2025-09-30 10:12:48 +0000 UTC" firstStartedPulling="2025-09-30 10:12:49.622218983 +0000 UTC m=+1582.694069937" lastFinishedPulling="2025-09-30 10:12:50.165317904 +0000 UTC m=+1583.237168838" observedRunningTime="2025-09-30 10:12:50.747146961 +0000 UTC m=+1583.818997895" watchObservedRunningTime="2025-09-30 10:12:50.749318491 +0000 UTC m=+1583.821169425" Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.059280 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/cinder-6710-account-create-qf44n"] Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.078802 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-s8gsl"] Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.087871 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ksbl9"] Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.094759 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-b35f-account-create-bdr9x"] Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.102156 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6710-account-create-qf44n"] Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.109419 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ksbl9"] Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.116582 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-s8gsl"] Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.132367 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-b35f-account-create-bdr9x"] Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.150611 4970 scope.go:117] "RemoveContainer" containerID="2ee88dfa499ba208108ed7624b0017f21883521e745056e02d4b216e4ee44051" Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.174013 4970 scope.go:117] "RemoveContainer" containerID="b6e72d7381a26467ffd4101687b0c1c934d894b2036581bbe57f243de287752b" Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.236966 4970 scope.go:117] "RemoveContainer" containerID="79b4aac52892d5bab75e3c10b191238a6692f41e1c7e6fd8a65eec12b09e55b5" Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.283088 4970 scope.go:117] "RemoveContainer" containerID="ae40f9f84f1166ef4ce5412b8c3ed40d2d3a7e73b3fddc6858a79cf0e7a8b976" Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.304442 4970 scope.go:117] "RemoveContainer" containerID="720aa6e7631f5dccbf367dee0c014a88e6aeb315d2828c6f16827b2c26d278e8" Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.358333 4970 scope.go:117] "RemoveContainer" containerID="d7432d097ba59dae17e86b9d0fbed9165674ecca90ef48a794c302207a8022c1" Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.377841 4970 scope.go:117] "RemoveContainer" containerID="f807d9238ae07d6e88f0cd0258c577a9816b171c5ee5ea4fd697289456c3c3cf" Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.432121 4970 scope.go:117] "RemoveContainer" containerID="50c1dc7707fc97ae3ab745c806d73cca2ec9b6fe0b80b07b3450867cce49f4d6" Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.482290 4970 scope.go:117] "RemoveContainer" containerID="f1eda0329c22c2f97835dff741ea11ecafb21872b36df65f37179b7b9e34794b" Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.499242 4970 scope.go:117] "RemoveContainer" containerID="9c6eccdb0024e6a87f62c0ee478400eefc0e593c775c92f1ee7b80f5ce3e2fc5" Sep 30 10:12:52 crc kubenswrapper[4970]: I0930 10:12:52.519046 4970 scope.go:117] "RemoveContainer" containerID="aa433e908051c68065de335091661717af966d385d6ae0ca6c967e0a4e2f32ac" Sep 30 10:12:53 crc kubenswrapper[4970]: I0930 10:12:53.687250 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f09121d-1a78-4a2a-90ca-90aa15067ebf" path="/var/lib/kubelet/pods/0f09121d-1a78-4a2a-90ca-90aa15067ebf/volumes" Sep 30 10:12:53 crc kubenswrapper[4970]: I0930 10:12:53.688058 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1" path="/var/lib/kubelet/pods/26f06d30-5f1f-43a9-94fd-1eca1e5ee0c1/volumes" Sep 30 10:12:53 crc kubenswrapper[4970]: I0930 10:12:53.688808 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a77c6408-e115-4a90-bba3-2d5c64f2c8a3" path="/var/lib/kubelet/pods/a77c6408-e115-4a90-bba3-2d5c64f2c8a3/volumes" Sep 30 10:12:53 crc kubenswrapper[4970]: I0930 10:12:53.689729 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3705523-7482-4c42-a7e6-9c2081ea7ce5" path="/var/lib/kubelet/pods/c3705523-7482-4c42-a7e6-9c2081ea7ce5/volumes" Sep 30 10:13:23 crc kubenswrapper[4970]: I0930 10:13:23.051767 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d622-account-create-7hnv9"] Sep 30 10:13:23 crc kubenswrapper[4970]: I0930 10:13:23.061042 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d622-account-create-7hnv9"] Sep 30 10:13:23 crc kubenswrapper[4970]: I0930 10:13:23.686709 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cbb3863-af8c-4cba-9557-a02d280dd1c7" path="/var/lib/kubelet/pods/2cbb3863-af8c-4cba-9557-a02d280dd1c7/volumes" Sep 30 10:13:34 crc kubenswrapper[4970]: I0930 10:13:34.821572 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:13:34 crc kubenswrapper[4970]: I0930 10:13:34.822247 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:13:37 crc kubenswrapper[4970]: I0930 10:13:37.028969 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lhrsc"] Sep 30 10:13:37 crc kubenswrapper[4970]: I0930 10:13:37.039290 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lhrsc"] Sep 30 10:13:37 crc kubenswrapper[4970]: I0930 10:13:37.681248 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e80a6b3-9edf-4984-b37c-80940382be1e" path="/var/lib/kubelet/pods/1e80a6b3-9edf-4984-b37c-80940382be1e/volumes" Sep 30 10:13:45 crc kubenswrapper[4970]: I0930 10:13:45.045434 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-zdzz8"] Sep 30 10:13:45 crc kubenswrapper[4970]: I0930 10:13:45.052208 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-2n6k8"] Sep 30 10:13:45 crc kubenswrapper[4970]: I0930 10:13:45.060749 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-2n6k8"] Sep 30 10:13:45 crc kubenswrapper[4970]: I0930 10:13:45.068754 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-zdzz8"] Sep 30 10:13:45 crc kubenswrapper[4970]: I0930 10:13:45.678903 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d62c32c-2c57-455b-9d92-6add27e33831" path="/var/lib/kubelet/pods/3d62c32c-2c57-455b-9d92-6add27e33831/volumes" Sep 30 10:13:45 crc kubenswrapper[4970]: I0930 10:13:45.680067 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6ea9f861-9877-480f-a490-08c80d2580cf" path="/var/lib/kubelet/pods/6ea9f861-9877-480f-a490-08c80d2580cf/volumes" Sep 30 10:13:50 crc kubenswrapper[4970]: I0930 10:13:50.044724 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4pw5n"] Sep 30 10:13:50 crc kubenswrapper[4970]: I0930 10:13:50.059157 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4pw5n"] Sep 30 10:13:51 crc kubenswrapper[4970]: I0930 10:13:51.679235 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad24190f-4eb6-49c8-bad6-c33a817cd9c6" path="/var/lib/kubelet/pods/ad24190f-4eb6-49c8-bad6-c33a817cd9c6/volumes" Sep 30 10:13:52 crc kubenswrapper[4970]: I0930 10:13:52.716034 4970 scope.go:117] "RemoveContainer" containerID="9fba5b9c2a718fe672382c81c7c52482e137a92aa07efbc8e15a28daa24e5276" Sep 30 10:13:52 crc kubenswrapper[4970]: I0930 10:13:52.742073 4970 scope.go:117] "RemoveContainer" containerID="48bf0db9c01b8ebefa87fdb71af9c0ba8a31ce3769f6800a86035d3f4b1f3661" Sep 30 10:13:52 crc kubenswrapper[4970]: I0930 10:13:52.812412 4970 scope.go:117] "RemoveContainer" containerID="4e91bd20cab6975f801c4ccaa9a7710e07f086b7c1118ecd7aa792569c099b37" Sep 30 10:13:52 crc kubenswrapper[4970]: I0930 10:13:52.849506 4970 scope.go:117] "RemoveContainer" containerID="e0d7235bb5495f77135975159a7f12f227ac7f50760380a0f9ed5bc15016aa17" Sep 30 10:13:52 crc kubenswrapper[4970]: I0930 10:13:52.888558 4970 scope.go:117] "RemoveContainer" containerID="808e41ea9a9e63232613721e4540c3c56be36b18911a2bc8ea8c6f421dcefc8e" Sep 30 10:13:52 crc kubenswrapper[4970]: I0930 10:13:52.936607 4970 scope.go:117] "RemoveContainer" containerID="ae088867f7339eb9f50393931a94de209e64d3000981e959cd19fcd5a9b19886" Sep 30 10:13:52 crc kubenswrapper[4970]: I0930 10:13:52.969529 4970 scope.go:117] "RemoveContainer" containerID="1084cb733673f7ccfee88310d49c49131ab929fb6ac87c9df6ba6704eabce6b3" Sep 30 10:13:53 crc kubenswrapper[4970]: I0930 10:13:53.011117 4970 scope.go:117] "RemoveContainer" containerID="3b436dc00ecc5c6e68a4213249dade95ef6f177bfa38415ce1ce9e08a5bc41d0" Sep 30 10:13:53 crc kubenswrapper[4970]: I0930 10:13:53.043733 4970 scope.go:117] "RemoveContainer" containerID="c6c4bda9e70ba250112ef57d0fd1dae3c1436a2b633eeb1c3bdc3b5a76c999bc" Sep 30 10:13:57 crc kubenswrapper[4970]: I0930 10:13:57.026104 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-ll2vt"] Sep 30 10:13:57 crc kubenswrapper[4970]: I0930 10:13:57.037308 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-ll2vt"] Sep 30 10:13:57 crc kubenswrapper[4970]: I0930 10:13:57.690047 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab2e1257-9947-41e6-8c2e-a366f4ea4c47" path="/var/lib/kubelet/pods/ab2e1257-9947-41e6-8c2e-a366f4ea4c47/volumes" Sep 30 10:14:04 crc kubenswrapper[4970]: I0930 10:14:04.504957 4970 generic.go:334] "Generic (PLEG): container finished" podID="d70e09c1-47df-4742-b2fc-77c354169b46" containerID="bc3db50d78edf9822f99f88f573c8fb6f6a6e693aed0d95df10accd5c0518130" exitCode=0 Sep 30 10:14:04 crc kubenswrapper[4970]: I0930 10:14:04.505027 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" event={"ID":"d70e09c1-47df-4742-b2fc-77c354169b46","Type":"ContainerDied","Data":"bc3db50d78edf9822f99f88f573c8fb6f6a6e693aed0d95df10accd5c0518130"} Sep 30 10:14:04 crc kubenswrapper[4970]: I0930 10:14:04.821884 4970 patch_prober.go:28] 
interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:14:04 crc kubenswrapper[4970]: I0930 10:14:04.822212 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:14:05 crc kubenswrapper[4970]: I0930 10:14:05.962738 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.081919 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d70e09c1-47df-4742-b2fc-77c354169b46-inventory\") pod \"d70e09c1-47df-4742-b2fc-77c354169b46\" (UID: \"d70e09c1-47df-4742-b2fc-77c354169b46\") " Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.082007 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d70e09c1-47df-4742-b2fc-77c354169b46-ssh-key\") pod \"d70e09c1-47df-4742-b2fc-77c354169b46\" (UID: \"d70e09c1-47df-4742-b2fc-77c354169b46\") " Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.082117 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8vkj\" (UniqueName: \"kubernetes.io/projected/d70e09c1-47df-4742-b2fc-77c354169b46-kube-api-access-s8vkj\") pod \"d70e09c1-47df-4742-b2fc-77c354169b46\" (UID: \"d70e09c1-47df-4742-b2fc-77c354169b46\") " Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.091760 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d70e09c1-47df-4742-b2fc-77c354169b46-kube-api-access-s8vkj" (OuterVolumeSpecName: "kube-api-access-s8vkj") pod "d70e09c1-47df-4742-b2fc-77c354169b46" (UID: "d70e09c1-47df-4742-b2fc-77c354169b46"). InnerVolumeSpecName "kube-api-access-s8vkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.115181 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70e09c1-47df-4742-b2fc-77c354169b46-inventory" (OuterVolumeSpecName: "inventory") pod "d70e09c1-47df-4742-b2fc-77c354169b46" (UID: "d70e09c1-47df-4742-b2fc-77c354169b46"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.127397 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d70e09c1-47df-4742-b2fc-77c354169b46-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d70e09c1-47df-4742-b2fc-77c354169b46" (UID: "d70e09c1-47df-4742-b2fc-77c354169b46"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.187876 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d70e09c1-47df-4742-b2fc-77c354169b46-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.187966 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d70e09c1-47df-4742-b2fc-77c354169b46-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.188038 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8vkj\" (UniqueName: \"kubernetes.io/projected/d70e09c1-47df-4742-b2fc-77c354169b46-kube-api-access-s8vkj\") on node \"crc\" DevicePath \"\"" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.527553 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" event={"ID":"d70e09c1-47df-4742-b2fc-77c354169b46","Type":"ContainerDied","Data":"9a7693d47d4f1a5af279beb646ec89e79adf7b86616f6bbe928006c2a904809c"} Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.527615 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a7693d47d4f1a5af279beb646ec89e79adf7b86616f6bbe928006c2a904809c" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.527631 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.621116 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v"] Sep 30 10:14:06 crc kubenswrapper[4970]: E0930 10:14:06.621557 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d70e09c1-47df-4742-b2fc-77c354169b46" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.621576 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d70e09c1-47df-4742-b2fc-77c354169b46" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.621786 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d70e09c1-47df-4742-b2fc-77c354169b46" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.622511 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.624115 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.624336 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.624392 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.624775 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.628834 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v"] Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.800949 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jslnh\" (UniqueName: \"kubernetes.io/projected/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-kube-api-access-jslnh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nf84v\" (UID: \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.801474 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nf84v\" (UID: \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.801698 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nf84v\" (UID: \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.903513 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jslnh\" (UniqueName: \"kubernetes.io/projected/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-kube-api-access-jslnh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nf84v\" (UID: \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.903592 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nf84v\" (UID: \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.903684 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-nf84v\" (UID: \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.907768 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nf84v\" (UID: \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.909344 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nf84v\" (UID: \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.928313 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jslnh\" (UniqueName: \"kubernetes.io/projected/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-kube-api-access-jslnh\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-nf84v\" (UID: \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:06 crc kubenswrapper[4970]: I0930 10:14:06.938137 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:07 crc kubenswrapper[4970]: I0930 10:14:07.460291 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v"] Sep 30 10:14:07 crc kubenswrapper[4970]: I0930 10:14:07.537534 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" event={"ID":"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5","Type":"ContainerStarted","Data":"858e25caf9034abeac099301bdd63c50d82610e455246e92107aba2b7fe25723"} Sep 30 10:14:08 crc kubenswrapper[4970]: I0930 10:14:08.545851 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" event={"ID":"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5","Type":"ContainerStarted","Data":"255aa194d92e7fc6926f352717677707237807a357539304e8e769ed55eb23c7"} Sep 30 10:14:08 crc kubenswrapper[4970]: I0930 10:14:08.568474 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" podStartSLOduration=2.002545778 podStartE2EDuration="2.568454715s" podCreationTimestamp="2025-09-30 10:14:06 +0000 UTC" firstStartedPulling="2025-09-30 10:14:07.468524912 +0000 UTC m=+1660.540375846" lastFinishedPulling="2025-09-30 10:14:08.034433849 +0000 UTC m=+1661.106284783" observedRunningTime="2025-09-30 10:14:08.560690142 +0000 UTC m=+1661.632541086" watchObservedRunningTime="2025-09-30 10:14:08.568454715 +0000 UTC m=+1661.640305659" Sep 30 10:14:13 crc kubenswrapper[4970]: I0930 10:14:13.607965 4970 generic.go:334] "Generic (PLEG): container finished" podID="65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5" containerID="255aa194d92e7fc6926f352717677707237807a357539304e8e769ed55eb23c7" exitCode=0 Sep 30 10:14:13 crc kubenswrapper[4970]: I0930 
Sep 30 10:14:13 crc kubenswrapper[4970]: I0930 10:14:13.608062 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" event={"ID":"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5","Type":"ContainerDied","Data":"255aa194d92e7fc6926f352717677707237807a357539304e8e769ed55eb23c7"} Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.062947 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.160275 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-ssh-key\") pod \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\" (UID: \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\") " Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.160382 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-inventory\") pod \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\" (UID: \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\") " Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.160604 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jslnh\" (UniqueName: \"kubernetes.io/projected/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-kube-api-access-jslnh\") pod \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\" (UID: \"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5\") " Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.167059 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-kube-api-access-jslnh" (OuterVolumeSpecName: "kube-api-access-jslnh") pod "65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5" (UID: "65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5"). InnerVolumeSpecName "kube-api-access-jslnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.187783 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-inventory" (OuterVolumeSpecName: "inventory") pod "65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5" (UID: "65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.193027 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5" (UID: "65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.262880 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.262926 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jslnh\" (UniqueName: \"kubernetes.io/projected/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-kube-api-access-jslnh\") on node \"crc\" DevicePath \"\"" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.262942 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.628638 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" event={"ID":"65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5","Type":"ContainerDied","Data":"858e25caf9034abeac099301bdd63c50d82610e455246e92107aba2b7fe25723"} Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.628962 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="858e25caf9034abeac099301bdd63c50d82610e455246e92107aba2b7fe25723" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.628693 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-nf84v" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.729276 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr"] Sep 30 10:14:15 crc kubenswrapper[4970]: E0930 10:14:15.729795 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.729819 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.730070 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.730978 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.733529 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.733769 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.733918 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.734101 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.737906 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr"] Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.874308 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xnw4\" (UniqueName: \"kubernetes.io/projected/2f87be4d-7eec-4133-a07f-4cbe2b88548f-kube-api-access-8xnw4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kl5qr\" (UID: \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.874644 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f87be4d-7eec-4133-a07f-4cbe2b88548f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kl5qr\" (UID: \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.874774 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f87be4d-7eec-4133-a07f-4cbe2b88548f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kl5qr\" (UID: \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.976796 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xnw4\" (UniqueName: \"kubernetes.io/projected/2f87be4d-7eec-4133-a07f-4cbe2b88548f-kube-api-access-8xnw4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kl5qr\" (UID: \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.976843 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f87be4d-7eec-4133-a07f-4cbe2b88548f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kl5qr\" (UID: \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.976991 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f87be4d-7eec-4133-a07f-4cbe2b88548f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kl5qr\" (UID: 
\"2f87be4d-7eec-4133-a07f-4cbe2b88548f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.980518 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f87be4d-7eec-4133-a07f-4cbe2b88548f-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kl5qr\" (UID: \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:15 crc kubenswrapper[4970]: I0930 10:14:15.988506 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f87be4d-7eec-4133-a07f-4cbe2b88548f-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kl5qr\" (UID: \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:16 crc kubenswrapper[4970]: I0930 10:14:16.005797 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xnw4\" (UniqueName: \"kubernetes.io/projected/2f87be4d-7eec-4133-a07f-4cbe2b88548f-kube-api-access-8xnw4\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-kl5qr\" (UID: \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:16 crc kubenswrapper[4970]: I0930 10:14:16.090140 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:16 crc kubenswrapper[4970]: I0930 10:14:16.617821 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr"] Sep 30 10:14:16 crc kubenswrapper[4970]: I0930 10:14:16.639199 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" event={"ID":"2f87be4d-7eec-4133-a07f-4cbe2b88548f","Type":"ContainerStarted","Data":"dedc5a94879d78ac07df4b94396689da4d741ef9cca49fe8aa68f892e70de5cf"} Sep 30 10:14:17 crc kubenswrapper[4970]: I0930 10:14:17.652821 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" event={"ID":"2f87be4d-7eec-4133-a07f-4cbe2b88548f","Type":"ContainerStarted","Data":"68fd8742c4413726618762cfca9f869fd13a8a32fdc99878a42cba7391396c02"} Sep 30 10:14:17 crc kubenswrapper[4970]: I0930 10:14:17.692672 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" podStartSLOduration=2.297314386 podStartE2EDuration="2.692638328s" podCreationTimestamp="2025-09-30 10:14:15 +0000 UTC" firstStartedPulling="2025-09-30 10:14:16.620883687 +0000 UTC m=+1669.692734621" lastFinishedPulling="2025-09-30 10:14:17.016207639 +0000 UTC m=+1670.088058563" observedRunningTime="2025-09-30 10:14:17.670441381 +0000 UTC m=+1670.742292315" watchObservedRunningTime="2025-09-30 10:14:17.692638328 +0000 UTC m=+1670.764489292" Sep 30 10:14:19 crc kubenswrapper[4970]: I0930 10:14:19.045151 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vkx9z"] Sep 30 10:14:19 crc kubenswrapper[4970]: I0930 10:14:19.056510 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vkx9z"] Sep 30 10:14:19 crc kubenswrapper[4970]: I0930 10:14:19.682287 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="9766459a-e1c1-48a0-a45c-bda24281c6d6" path="/var/lib/kubelet/pods/9766459a-e1c1-48a0-a45c-bda24281c6d6/volumes" Sep 30 10:14:26 crc kubenswrapper[4970]: I0930 10:14:26.029081 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-vpsk6"] Sep 30 10:14:26 crc kubenswrapper[4970]: I0930 10:14:26.040370 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-lrs4s"] Sep 30 10:14:26 crc kubenswrapper[4970]: I0930 10:14:26.049844 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-vpsk6"] Sep 30 10:14:26 crc kubenswrapper[4970]: I0930 10:14:26.059674 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-lrs4s"] Sep 30 10:14:27 crc kubenswrapper[4970]: I0930 10:14:27.698452 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0881a1bd-fe20-475f-9cf8-9869a3c11344" path="/var/lib/kubelet/pods/0881a1bd-fe20-475f-9cf8-9869a3c11344/volumes" Sep 30 10:14:27 crc kubenswrapper[4970]: I0930 10:14:27.700533 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f300b93d-68bd-45af-a07a-4dcd57af3f00" path="/var/lib/kubelet/pods/f300b93d-68bd-45af-a07a-4dcd57af3f00/volumes" Sep 30 10:14:29 crc kubenswrapper[4970]: I0930 10:14:29.031764 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-03f9-account-create-wj49d"] Sep 30 10:14:29 crc kubenswrapper[4970]: I0930 10:14:29.040142 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-03f9-account-create-wj49d"] Sep 30 10:14:29 crc kubenswrapper[4970]: I0930 10:14:29.680064 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="904d5a82-7c31-44b2-b222-58009f2f53ea" path="/var/lib/kubelet/pods/904d5a82-7c31-44b2-b222-58009f2f53ea/volumes" Sep 30 10:14:34 crc kubenswrapper[4970]: I0930 10:14:34.821438 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:14:34 crc kubenswrapper[4970]: I0930 10:14:34.822330 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:14:34 crc kubenswrapper[4970]: I0930 10:14:34.822401 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 10:14:34 crc kubenswrapper[4970]: I0930 10:14:34.823639 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 10:14:34 crc kubenswrapper[4970]: I0930 10:14:34.823768 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" 
containerID="cri-o://446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" gracePeriod=600 Sep 30 10:14:34 crc kubenswrapper[4970]: E0930 10:14:34.976168 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:14:35 crc kubenswrapper[4970]: I0930 10:14:35.868978 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" exitCode=0 Sep 30 10:14:35 crc kubenswrapper[4970]: I0930 10:14:35.869080 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac"} Sep 30 10:14:35 crc kubenswrapper[4970]: I0930 10:14:35.869335 4970 scope.go:117] "RemoveContainer" containerID="211775f0ae6ad2cf1245081565b8e3cfb1184d576eabfc89da9e31fdd3771913" Sep 30 10:14:35 crc kubenswrapper[4970]: I0930 10:14:35.869728 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:14:35 crc kubenswrapper[4970]: E0930 10:14:35.869955 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:14:39 crc kubenswrapper[4970]: I0930 10:14:39.036462 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-7fb2-account-create-t9tmn"] Sep 30 10:14:39 crc kubenswrapper[4970]: I0930 10:14:39.046377 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-7fb2-account-create-t9tmn"] Sep 30 10:14:39 crc kubenswrapper[4970]: I0930 10:14:39.688782 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59187562-fc0e-49ac-b22d-81abc6850bd7" path="/var/lib/kubelet/pods/59187562-fc0e-49ac-b22d-81abc6850bd7/volumes" Sep 30 10:14:40 crc kubenswrapper[4970]: I0930 10:14:40.026397 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-91c3-account-create-f6wvn"] Sep 30 10:14:40 crc kubenswrapper[4970]: I0930 10:14:40.036603 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-91c3-account-create-f6wvn"] Sep 30 10:14:41 crc kubenswrapper[4970]: I0930 10:14:41.682694 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6c12a73-2619-429d-9ef3-2a7a25cc0906" path="/var/lib/kubelet/pods/f6c12a73-2619-429d-9ef3-2a7a25cc0906/volumes" Sep 30 10:14:49 crc kubenswrapper[4970]: I0930 10:14:49.669261 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:14:49 crc kubenswrapper[4970]: E0930 10:14:49.670319 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:14:53 crc kubenswrapper[4970]: I0930 10:14:53.061298 4970 generic.go:334] "Generic (PLEG): container finished" podID="2f87be4d-7eec-4133-a07f-4cbe2b88548f" containerID="68fd8742c4413726618762cfca9f869fd13a8a32fdc99878a42cba7391396c02" exitCode=0 Sep 30 10:14:53 crc kubenswrapper[4970]: I0930 10:14:53.061430 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" event={"ID":"2f87be4d-7eec-4133-a07f-4cbe2b88548f","Type":"ContainerDied","Data":"68fd8742c4413726618762cfca9f869fd13a8a32fdc99878a42cba7391396c02"} Sep 30 10:14:53 crc kubenswrapper[4970]: I0930 10:14:53.197574 4970 scope.go:117] "RemoveContainer" containerID="214925743b47809ebb0e3c09c776ecd30126704a55bd0862df1d1b312c361c57" Sep 30 10:14:53 crc kubenswrapper[4970]: I0930 10:14:53.263841 4970 scope.go:117] "RemoveContainer" containerID="62220ee2152481ebb859c99856a6bd8932a403e9d475ea15239dcc19ffbb9884" Sep 30 10:14:53 crc kubenswrapper[4970]: I0930 10:14:53.288636 4970 scope.go:117] "RemoveContainer" containerID="4b788ce585a535ed51047893a29ec7952437e6c58df74da4ccee06bc2091daf9" Sep 30 10:14:53 crc kubenswrapper[4970]: I0930 10:14:53.351539 4970 scope.go:117] "RemoveContainer" containerID="bb32fcdd0f48738a6fe2f819a80e303788ca253381b61529c98f614911741314" Sep 30 10:14:53 crc kubenswrapper[4970]: I0930 10:14:53.401598 4970 scope.go:117] "RemoveContainer" containerID="3d42c78a809f5b69841ce96521ceb6ca09c3a19e72afb66cc9ec4aaa31bc574e" Sep 30 10:14:53 crc kubenswrapper[4970]: I0930 10:14:53.448212 4970 scope.go:117] "RemoveContainer" containerID="4b7f31b37957c7407e7f3cf148c36b560bf5432f0397f407b716f0c16365645e" Sep 30 10:14:53 crc kubenswrapper[4970]: I0930 10:14:53.497003 4970 scope.go:117] "RemoveContainer" containerID="ebfcb695f4d298729bb91fb24f13d4a986cc5758849f62350c775c3d08c7bfa7" Sep 30 10:14:54 crc kubenswrapper[4970]: I0930 10:14:54.484008 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:54 crc kubenswrapper[4970]: I0930 10:14:54.588214 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xnw4\" (UniqueName: \"kubernetes.io/projected/2f87be4d-7eec-4133-a07f-4cbe2b88548f-kube-api-access-8xnw4\") pod \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\" (UID: \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\") " Sep 30 10:14:54 crc kubenswrapper[4970]: I0930 10:14:54.588277 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f87be4d-7eec-4133-a07f-4cbe2b88548f-inventory\") pod \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\" (UID: \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\") " Sep 30 10:14:54 crc kubenswrapper[4970]: I0930 10:14:54.588339 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f87be4d-7eec-4133-a07f-4cbe2b88548f-ssh-key\") pod \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\" (UID: \"2f87be4d-7eec-4133-a07f-4cbe2b88548f\") " Sep 30 10:14:54 crc kubenswrapper[4970]: I0930 10:14:54.593494 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f87be4d-7eec-4133-a07f-4cbe2b88548f-kube-api-access-8xnw4" (OuterVolumeSpecName: "kube-api-access-8xnw4") pod "2f87be4d-7eec-4133-a07f-4cbe2b88548f" (UID: "2f87be4d-7eec-4133-a07f-4cbe2b88548f"). InnerVolumeSpecName "kube-api-access-8xnw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:14:54 crc kubenswrapper[4970]: I0930 10:14:54.614275 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f87be4d-7eec-4133-a07f-4cbe2b88548f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f87be4d-7eec-4133-a07f-4cbe2b88548f" (UID: "2f87be4d-7eec-4133-a07f-4cbe2b88548f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:14:54 crc kubenswrapper[4970]: I0930 10:14:54.616659 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f87be4d-7eec-4133-a07f-4cbe2b88548f-inventory" (OuterVolumeSpecName: "inventory") pod "2f87be4d-7eec-4133-a07f-4cbe2b88548f" (UID: "2f87be4d-7eec-4133-a07f-4cbe2b88548f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:14:54 crc kubenswrapper[4970]: I0930 10:14:54.690863 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xnw4\" (UniqueName: \"kubernetes.io/projected/2f87be4d-7eec-4133-a07f-4cbe2b88548f-kube-api-access-8xnw4\") on node \"crc\" DevicePath \"\"" Sep 30 10:14:54 crc kubenswrapper[4970]: I0930 10:14:54.690903 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f87be4d-7eec-4133-a07f-4cbe2b88548f-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:14:54 crc kubenswrapper[4970]: I0930 10:14:54.690912 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f87be4d-7eec-4133-a07f-4cbe2b88548f-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.084612 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" event={"ID":"2f87be4d-7eec-4133-a07f-4cbe2b88548f","Type":"ContainerDied","Data":"dedc5a94879d78ac07df4b94396689da4d741ef9cca49fe8aa68f892e70de5cf"} Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.085226 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dedc5a94879d78ac07df4b94396689da4d741ef9cca49fe8aa68f892e70de5cf" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.085323 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-kl5qr" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.233029 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r"] Sep 30 10:14:55 crc kubenswrapper[4970]: E0930 10:14:55.233510 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f87be4d-7eec-4133-a07f-4cbe2b88548f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.233534 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f87be4d-7eec-4133-a07f-4cbe2b88548f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.233764 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f87be4d-7eec-4133-a07f-4cbe2b88548f" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.236355 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.238772 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.239070 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.240016 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.240934 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r"] Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.242208 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.302299 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42gj5\" (UniqueName: \"kubernetes.io/projected/039233e2-0b03-4514-b359-5552e4d09ffc-kube-api-access-42gj5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r\" (UID: \"039233e2-0b03-4514-b359-5552e4d09ffc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.302373 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/039233e2-0b03-4514-b359-5552e4d09ffc-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r\" (UID: \"039233e2-0b03-4514-b359-5552e4d09ffc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.302451 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/039233e2-0b03-4514-b359-5552e4d09ffc-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r\" (UID: \"039233e2-0b03-4514-b359-5552e4d09ffc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.404544 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42gj5\" (UniqueName: \"kubernetes.io/projected/039233e2-0b03-4514-b359-5552e4d09ffc-kube-api-access-42gj5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r\" (UID: \"039233e2-0b03-4514-b359-5552e4d09ffc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.404663 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/039233e2-0b03-4514-b359-5552e4d09ffc-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r\" (UID: \"039233e2-0b03-4514-b359-5552e4d09ffc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.404759 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/039233e2-0b03-4514-b359-5552e4d09ffc-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r\" 
(UID: \"039233e2-0b03-4514-b359-5552e4d09ffc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.420105 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/039233e2-0b03-4514-b359-5552e4d09ffc-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r\" (UID: \"039233e2-0b03-4514-b359-5552e4d09ffc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.421172 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/039233e2-0b03-4514-b359-5552e4d09ffc-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r\" (UID: \"039233e2-0b03-4514-b359-5552e4d09ffc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.428443 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42gj5\" (UniqueName: \"kubernetes.io/projected/039233e2-0b03-4514-b359-5552e4d09ffc-kube-api-access-42gj5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r\" (UID: \"039233e2-0b03-4514-b359-5552e4d09ffc\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:14:55 crc kubenswrapper[4970]: I0930 10:14:55.565441 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:14:56 crc kubenswrapper[4970]: I0930 10:14:56.199155 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r"] Sep 30 10:14:56 crc kubenswrapper[4970]: W0930 10:14:56.200416 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod039233e2_0b03_4514_b359_5552e4d09ffc.slice/crio-e1551cdde06035b718b9e2f1ef1b5fa84bb1e4e068e12d874a7115401ae83962 WatchSource:0}: Error finding container e1551cdde06035b718b9e2f1ef1b5fa84bb1e4e068e12d874a7115401ae83962: Status 404 returned error can't find the container with id e1551cdde06035b718b9e2f1ef1b5fa84bb1e4e068e12d874a7115401ae83962 Sep 30 10:14:57 crc kubenswrapper[4970]: I0930 10:14:57.102331 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" event={"ID":"039233e2-0b03-4514-b359-5552e4d09ffc","Type":"ContainerStarted","Data":"7b823b260d70298975652f629363b09419ada5c394ad129b81f3bd283e9f9a8b"} Sep 30 10:14:57 crc kubenswrapper[4970]: I0930 10:14:57.102636 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" event={"ID":"039233e2-0b03-4514-b359-5552e4d09ffc","Type":"ContainerStarted","Data":"e1551cdde06035b718b9e2f1ef1b5fa84bb1e4e068e12d874a7115401ae83962"} Sep 30 10:14:57 crc kubenswrapper[4970]: I0930 10:14:57.119077 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" podStartSLOduration=1.638055489 podStartE2EDuration="2.119062444s" podCreationTimestamp="2025-09-30 10:14:55 +0000 UTC" firstStartedPulling="2025-09-30 10:14:56.203617307 +0000 UTC m=+1709.275468251" lastFinishedPulling="2025-09-30 10:14:56.684624262 +0000 UTC m=+1709.756475206" observedRunningTime="2025-09-30 
10:14:57.116426002 +0000 UTC m=+1710.188276936" watchObservedRunningTime="2025-09-30 10:14:57.119062444 +0000 UTC m=+1710.190913378" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.143106 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s"] Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.144719 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.149720 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.150099 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.157274 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s"] Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.219477 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75a99a4f-0e77-4cae-9c73-613144df96e0-config-volume\") pod \"collect-profiles-29320455-dqd2s\" (UID: \"75a99a4f-0e77-4cae-9c73-613144df96e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.219610 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75a99a4f-0e77-4cae-9c73-613144df96e0-secret-volume\") pod \"collect-profiles-29320455-dqd2s\" (UID: \"75a99a4f-0e77-4cae-9c73-613144df96e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.219809 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9brw\" (UniqueName: \"kubernetes.io/projected/75a99a4f-0e77-4cae-9c73-613144df96e0-kube-api-access-p9brw\") pod \"collect-profiles-29320455-dqd2s\" (UID: \"75a99a4f-0e77-4cae-9c73-613144df96e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.322509 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75a99a4f-0e77-4cae-9c73-613144df96e0-config-volume\") pod \"collect-profiles-29320455-dqd2s\" (UID: \"75a99a4f-0e77-4cae-9c73-613144df96e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.322943 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75a99a4f-0e77-4cae-9c73-613144df96e0-secret-volume\") pod \"collect-profiles-29320455-dqd2s\" (UID: \"75a99a4f-0e77-4cae-9c73-613144df96e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.323033 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9brw\" (UniqueName: 
\"kubernetes.io/projected/75a99a4f-0e77-4cae-9c73-613144df96e0-kube-api-access-p9brw\") pod \"collect-profiles-29320455-dqd2s\" (UID: \"75a99a4f-0e77-4cae-9c73-613144df96e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.324477 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75a99a4f-0e77-4cae-9c73-613144df96e0-config-volume\") pod \"collect-profiles-29320455-dqd2s\" (UID: \"75a99a4f-0e77-4cae-9c73-613144df96e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.334785 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75a99a4f-0e77-4cae-9c73-613144df96e0-secret-volume\") pod \"collect-profiles-29320455-dqd2s\" (UID: \"75a99a4f-0e77-4cae-9c73-613144df96e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.368226 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9brw\" (UniqueName: \"kubernetes.io/projected/75a99a4f-0e77-4cae-9c73-613144df96e0-kube-api-access-p9brw\") pod \"collect-profiles-29320455-dqd2s\" (UID: \"75a99a4f-0e77-4cae-9c73-613144df96e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.466657 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:00 crc kubenswrapper[4970]: I0930 10:15:00.987609 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s"] Sep 30 10:15:01 crc kubenswrapper[4970]: I0930 10:15:01.143814 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" event={"ID":"75a99a4f-0e77-4cae-9c73-613144df96e0","Type":"ContainerStarted","Data":"6e0a79a6b42ba15169a0bbd323c32fec10f0722b9289000ac0e42790e686ab1b"} Sep 30 10:15:01 crc kubenswrapper[4970]: I0930 10:15:01.164206 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" podStartSLOduration=1.164176267 podStartE2EDuration="1.164176267s" podCreationTimestamp="2025-09-30 10:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:15:01.155810739 +0000 UTC m=+1714.227661713" watchObservedRunningTime="2025-09-30 10:15:01.164176267 +0000 UTC m=+1714.236027241" Sep 30 10:15:02 crc kubenswrapper[4970]: I0930 10:15:02.154657 4970 generic.go:334] "Generic (PLEG): container finished" podID="75a99a4f-0e77-4cae-9c73-613144df96e0" containerID="d0014e27ac2c93eaea201de7b1c8f872ae79cccc4d9cc5c7e9e6337ff8a087dc" exitCode=0 Sep 30 10:15:02 crc kubenswrapper[4970]: I0930 10:15:02.154699 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" event={"ID":"75a99a4f-0e77-4cae-9c73-613144df96e0","Type":"ContainerDied","Data":"d0014e27ac2c93eaea201de7b1c8f872ae79cccc4d9cc5c7e9e6337ff8a087dc"} Sep 30 10:15:03 crc kubenswrapper[4970]: I0930 10:15:03.473141 4970 util.go:48] "No ready sandbox for 
Sep 30 10:15:03 crc kubenswrapper[4970]: I0930 10:15:03.473141 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:03 crc kubenswrapper[4970]: I0930 10:15:03.597553 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9brw\" (UniqueName: \"kubernetes.io/projected/75a99a4f-0e77-4cae-9c73-613144df96e0-kube-api-access-p9brw\") pod \"75a99a4f-0e77-4cae-9c73-613144df96e0\" (UID: \"75a99a4f-0e77-4cae-9c73-613144df96e0\") " Sep 30 10:15:03 crc kubenswrapper[4970]: I0930 10:15:03.597628 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75a99a4f-0e77-4cae-9c73-613144df96e0-config-volume\") pod \"75a99a4f-0e77-4cae-9c73-613144df96e0\" (UID: \"75a99a4f-0e77-4cae-9c73-613144df96e0\") " Sep 30 10:15:03 crc kubenswrapper[4970]: I0930 10:15:03.597854 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75a99a4f-0e77-4cae-9c73-613144df96e0-secret-volume\") pod \"75a99a4f-0e77-4cae-9c73-613144df96e0\" (UID: \"75a99a4f-0e77-4cae-9c73-613144df96e0\") " Sep 30 10:15:03 crc kubenswrapper[4970]: I0930 10:15:03.598787 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75a99a4f-0e77-4cae-9c73-613144df96e0-config-volume" (OuterVolumeSpecName: "config-volume") pod "75a99a4f-0e77-4cae-9c73-613144df96e0" (UID: "75a99a4f-0e77-4cae-9c73-613144df96e0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:15:03 crc kubenswrapper[4970]: I0930 10:15:03.606551 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75a99a4f-0e77-4cae-9c73-613144df96e0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75a99a4f-0e77-4cae-9c73-613144df96e0" (UID: "75a99a4f-0e77-4cae-9c73-613144df96e0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:15:03 crc kubenswrapper[4970]: I0930 10:15:03.607436 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75a99a4f-0e77-4cae-9c73-613144df96e0-kube-api-access-p9brw" (OuterVolumeSpecName: "kube-api-access-p9brw") pod "75a99a4f-0e77-4cae-9c73-613144df96e0" (UID: "75a99a4f-0e77-4cae-9c73-613144df96e0"). InnerVolumeSpecName "kube-api-access-p9brw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:15:03 crc kubenswrapper[4970]: I0930 10:15:03.700420 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75a99a4f-0e77-4cae-9c73-613144df96e0-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 10:15:03 crc kubenswrapper[4970]: I0930 10:15:03.700454 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9brw\" (UniqueName: \"kubernetes.io/projected/75a99a4f-0e77-4cae-9c73-613144df96e0-kube-api-access-p9brw\") on node \"crc\" DevicePath \"\"" Sep 30 10:15:03 crc kubenswrapper[4970]: I0930 10:15:03.700464 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75a99a4f-0e77-4cae-9c73-613144df96e0-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 10:15:04 crc kubenswrapper[4970]: I0930 10:15:04.179248 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" event={"ID":"75a99a4f-0e77-4cae-9c73-613144df96e0","Type":"ContainerDied","Data":"6e0a79a6b42ba15169a0bbd323c32fec10f0722b9289000ac0e42790e686ab1b"} Sep 30 10:15:04 crc kubenswrapper[4970]: I0930 10:15:04.179607 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e0a79a6b42ba15169a0bbd323c32fec10f0722b9289000ac0e42790e686ab1b" Sep 30 10:15:04 crc kubenswrapper[4970]: I0930 10:15:04.179513 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320455-dqd2s" Sep 30 10:15:04 crc kubenswrapper[4970]: I0930 10:15:04.669159 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:15:04 crc kubenswrapper[4970]: E0930 10:15:04.669462 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:15:07 crc kubenswrapper[4970]: I0930 10:15:07.059328 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n2fjv"] Sep 30 10:15:07 crc kubenswrapper[4970]: I0930 10:15:07.072268 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-n2fjv"] Sep 30 10:15:07 crc kubenswrapper[4970]: I0930 10:15:07.682954 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a" path="/var/lib/kubelet/pods/83571c5f-f5d7-47e3-8e2c-a0be1f7f0f1a/volumes" Sep 30 10:15:19 crc kubenswrapper[4970]: I0930 10:15:19.668148 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:15:19 crc kubenswrapper[4970]: E0930 10:15:19.669136 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" 
podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:15:29 crc kubenswrapper[4970]: I0930 10:15:29.056284 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-9rsdz"] Sep 30 10:15:29 crc kubenswrapper[4970]: I0930 10:15:29.064766 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-9rsdz"] Sep 30 10:15:29 crc kubenswrapper[4970]: I0930 10:15:29.685051 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21aae404-5fe4-4df2-8f82-b860e665a2d8" path="/var/lib/kubelet/pods/21aae404-5fe4-4df2-8f82-b860e665a2d8/volumes" Sep 30 10:15:31 crc kubenswrapper[4970]: I0930 10:15:31.027411 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fnxht"] Sep 30 10:15:31 crc kubenswrapper[4970]: I0930 10:15:31.040940 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-fnxht"] Sep 30 10:15:31 crc kubenswrapper[4970]: I0930 10:15:31.677855 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69d21ef3-2e4e-4226-935f-b09feb8c4d19" path="/var/lib/kubelet/pods/69d21ef3-2e4e-4226-935f-b09feb8c4d19/volumes" Sep 30 10:15:32 crc kubenswrapper[4970]: I0930 10:15:32.668725 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:15:32 crc kubenswrapper[4970]: E0930 10:15:32.669342 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:15:43 crc kubenswrapper[4970]: I0930 10:15:43.513851 4970 generic.go:334] "Generic (PLEG): container finished" podID="039233e2-0b03-4514-b359-5552e4d09ffc" containerID="7b823b260d70298975652f629363b09419ada5c394ad129b81f3bd283e9f9a8b" exitCode=0 Sep 30 10:15:43 crc kubenswrapper[4970]: I0930 10:15:43.513958 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" event={"ID":"039233e2-0b03-4514-b359-5552e4d09ffc","Type":"ContainerDied","Data":"7b823b260d70298975652f629363b09419ada5c394ad129b81f3bd283e9f9a8b"} Sep 30 10:15:44 crc kubenswrapper[4970]: I0930 10:15:44.928305 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.038917 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/039233e2-0b03-4514-b359-5552e4d09ffc-ssh-key\") pod \"039233e2-0b03-4514-b359-5552e4d09ffc\" (UID: \"039233e2-0b03-4514-b359-5552e4d09ffc\") " Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.039291 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42gj5\" (UniqueName: \"kubernetes.io/projected/039233e2-0b03-4514-b359-5552e4d09ffc-kube-api-access-42gj5\") pod \"039233e2-0b03-4514-b359-5552e4d09ffc\" (UID: \"039233e2-0b03-4514-b359-5552e4d09ffc\") " Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.039599 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/039233e2-0b03-4514-b359-5552e4d09ffc-inventory\") pod \"039233e2-0b03-4514-b359-5552e4d09ffc\" (UID: \"039233e2-0b03-4514-b359-5552e4d09ffc\") " Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.048237 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/039233e2-0b03-4514-b359-5552e4d09ffc-kube-api-access-42gj5" (OuterVolumeSpecName: "kube-api-access-42gj5") pod "039233e2-0b03-4514-b359-5552e4d09ffc" (UID: "039233e2-0b03-4514-b359-5552e4d09ffc"). InnerVolumeSpecName "kube-api-access-42gj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.072343 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/039233e2-0b03-4514-b359-5552e4d09ffc-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "039233e2-0b03-4514-b359-5552e4d09ffc" (UID: "039233e2-0b03-4514-b359-5552e4d09ffc"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.073158 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/039233e2-0b03-4514-b359-5552e4d09ffc-inventory" (OuterVolumeSpecName: "inventory") pod "039233e2-0b03-4514-b359-5552e4d09ffc" (UID: "039233e2-0b03-4514-b359-5552e4d09ffc"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.142210 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/039233e2-0b03-4514-b359-5552e4d09ffc-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.142241 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/039233e2-0b03-4514-b359-5552e4d09ffc-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.142257 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42gj5\" (UniqueName: \"kubernetes.io/projected/039233e2-0b03-4514-b359-5552e4d09ffc-kube-api-access-42gj5\") on node \"crc\" DevicePath \"\"" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.533901 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" event={"ID":"039233e2-0b03-4514-b359-5552e4d09ffc","Type":"ContainerDied","Data":"e1551cdde06035b718b9e2f1ef1b5fa84bb1e4e068e12d874a7115401ae83962"} Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.534232 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1551cdde06035b718b9e2f1ef1b5fa84bb1e4e068e12d874a7115401ae83962" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.533981 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.633693 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fccfg"] Sep 30 10:15:45 crc kubenswrapper[4970]: E0930 10:15:45.634057 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75a99a4f-0e77-4cae-9c73-613144df96e0" containerName="collect-profiles" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.634072 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="75a99a4f-0e77-4cae-9c73-613144df96e0" containerName="collect-profiles" Sep 30 10:15:45 crc kubenswrapper[4970]: E0930 10:15:45.634119 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="039233e2-0b03-4514-b359-5552e4d09ffc" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.634129 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="039233e2-0b03-4514-b359-5552e4d09ffc" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.634293 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="039233e2-0b03-4514-b359-5552e4d09ffc" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.634309 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="75a99a4f-0e77-4cae-9c73-613144df96e0" containerName="collect-profiles" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.634902 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.638623 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.638715 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.638776 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.638649 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.646123 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fccfg"] Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.754389 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kvg7\" (UniqueName: \"kubernetes.io/projected/2d7598c9-6363-43e1-8913-1b7707fb57eb-kube-api-access-9kvg7\") pod \"ssh-known-hosts-edpm-deployment-fccfg\" (UID: \"2d7598c9-6363-43e1-8913-1b7707fb57eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.754657 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d7598c9-6363-43e1-8913-1b7707fb57eb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fccfg\" (UID: \"2d7598c9-6363-43e1-8913-1b7707fb57eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.755073 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2d7598c9-6363-43e1-8913-1b7707fb57eb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fccfg\" (UID: \"2d7598c9-6363-43e1-8913-1b7707fb57eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.857297 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2d7598c9-6363-43e1-8913-1b7707fb57eb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fccfg\" (UID: \"2d7598c9-6363-43e1-8913-1b7707fb57eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.857442 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kvg7\" (UniqueName: \"kubernetes.io/projected/2d7598c9-6363-43e1-8913-1b7707fb57eb-kube-api-access-9kvg7\") pod \"ssh-known-hosts-edpm-deployment-fccfg\" (UID: \"2d7598c9-6363-43e1-8913-1b7707fb57eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.857517 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d7598c9-6363-43e1-8913-1b7707fb57eb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fccfg\" (UID: \"2d7598c9-6363-43e1-8913-1b7707fb57eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:45 crc 
kubenswrapper[4970]: I0930 10:15:45.863639 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2d7598c9-6363-43e1-8913-1b7707fb57eb-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-fccfg\" (UID: \"2d7598c9-6363-43e1-8913-1b7707fb57eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.864926 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d7598c9-6363-43e1-8913-1b7707fb57eb-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-fccfg\" (UID: \"2d7598c9-6363-43e1-8913-1b7707fb57eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:45 crc kubenswrapper[4970]: I0930 10:15:45.881009 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kvg7\" (UniqueName: \"kubernetes.io/projected/2d7598c9-6363-43e1-8913-1b7707fb57eb-kube-api-access-9kvg7\") pod \"ssh-known-hosts-edpm-deployment-fccfg\" (UID: \"2d7598c9-6363-43e1-8913-1b7707fb57eb\") " pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:46 crc kubenswrapper[4970]: I0930 10:15:46.017221 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:46 crc kubenswrapper[4970]: I0930 10:15:46.573020 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-fccfg"] Sep 30 10:15:47 crc kubenswrapper[4970]: I0930 10:15:47.558719 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" event={"ID":"2d7598c9-6363-43e1-8913-1b7707fb57eb","Type":"ContainerStarted","Data":"5bc0ef78b3dfe9d0837b7ab5d1710db525dc4f16428afafc7470c03d73a79412"} Sep 30 10:15:47 crc kubenswrapper[4970]: I0930 10:15:47.559184 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" event={"ID":"2d7598c9-6363-43e1-8913-1b7707fb57eb","Type":"ContainerStarted","Data":"d23958cd01ad85315a3ab5bcf113d419d63efa7b38c3faf818df4e8de6a52242"} Sep 30 10:15:47 crc kubenswrapper[4970]: I0930 10:15:47.581706 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" podStartSLOduration=1.986078486 podStartE2EDuration="2.581678665s" podCreationTimestamp="2025-09-30 10:15:45 +0000 UTC" firstStartedPulling="2025-09-30 10:15:46.572761112 +0000 UTC m=+1759.644612076" lastFinishedPulling="2025-09-30 10:15:47.168361321 +0000 UTC m=+1760.240212255" observedRunningTime="2025-09-30 10:15:47.579617299 +0000 UTC m=+1760.651468273" watchObservedRunningTime="2025-09-30 10:15:47.581678665 +0000 UTC m=+1760.653529639" Sep 30 10:15:47 crc kubenswrapper[4970]: I0930 10:15:47.696262 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:15:47 crc kubenswrapper[4970]: E0930 10:15:47.697011 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:15:53 crc 
kubenswrapper[4970]: I0930 10:15:53.646444 4970 scope.go:117] "RemoveContainer" containerID="c0e88f3b9d194702638da51d118b3042eb0387a108bc02f9b7cde503f0e9a87e" Sep 30 10:15:53 crc kubenswrapper[4970]: I0930 10:15:53.708696 4970 scope.go:117] "RemoveContainer" containerID="1367a89aff58b12a8df8031425b319f2849c4ce4cb6336435e25b4c4c2467f7a" Sep 30 10:15:53 crc kubenswrapper[4970]: I0930 10:15:53.757557 4970 scope.go:117] "RemoveContainer" containerID="d32a28edb56f970e173492cb5914adbf73d93dd41742703b61dfead0036cf1c5" Sep 30 10:15:54 crc kubenswrapper[4970]: I0930 10:15:54.631495 4970 generic.go:334] "Generic (PLEG): container finished" podID="2d7598c9-6363-43e1-8913-1b7707fb57eb" containerID="5bc0ef78b3dfe9d0837b7ab5d1710db525dc4f16428afafc7470c03d73a79412" exitCode=0 Sep 30 10:15:54 crc kubenswrapper[4970]: I0930 10:15:54.631618 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" event={"ID":"2d7598c9-6363-43e1-8913-1b7707fb57eb","Type":"ContainerDied","Data":"5bc0ef78b3dfe9d0837b7ab5d1710db525dc4f16428afafc7470c03d73a79412"} Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.106311 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.185716 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d7598c9-6363-43e1-8913-1b7707fb57eb-ssh-key-openstack-edpm-ipam\") pod \"2d7598c9-6363-43e1-8913-1b7707fb57eb\" (UID: \"2d7598c9-6363-43e1-8913-1b7707fb57eb\") " Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.185883 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2d7598c9-6363-43e1-8913-1b7707fb57eb-inventory-0\") pod \"2d7598c9-6363-43e1-8913-1b7707fb57eb\" (UID: \"2d7598c9-6363-43e1-8913-1b7707fb57eb\") " Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.186116 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kvg7\" (UniqueName: \"kubernetes.io/projected/2d7598c9-6363-43e1-8913-1b7707fb57eb-kube-api-access-9kvg7\") pod \"2d7598c9-6363-43e1-8913-1b7707fb57eb\" (UID: \"2d7598c9-6363-43e1-8913-1b7707fb57eb\") " Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.217877 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d7598c9-6363-43e1-8913-1b7707fb57eb-kube-api-access-9kvg7" (OuterVolumeSpecName: "kube-api-access-9kvg7") pod "2d7598c9-6363-43e1-8913-1b7707fb57eb" (UID: "2d7598c9-6363-43e1-8913-1b7707fb57eb"). InnerVolumeSpecName "kube-api-access-9kvg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.232881 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7598c9-6363-43e1-8913-1b7707fb57eb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2d7598c9-6363-43e1-8913-1b7707fb57eb" (UID: "2d7598c9-6363-43e1-8913-1b7707fb57eb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.243452 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d7598c9-6363-43e1-8913-1b7707fb57eb-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2d7598c9-6363-43e1-8913-1b7707fb57eb" (UID: "2d7598c9-6363-43e1-8913-1b7707fb57eb"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.297412 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2d7598c9-6363-43e1-8913-1b7707fb57eb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.297606 4970 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2d7598c9-6363-43e1-8913-1b7707fb57eb-inventory-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.297669 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kvg7\" (UniqueName: \"kubernetes.io/projected/2d7598c9-6363-43e1-8913-1b7707fb57eb-kube-api-access-9kvg7\") on node \"crc\" DevicePath \"\"" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.651403 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" event={"ID":"2d7598c9-6363-43e1-8913-1b7707fb57eb","Type":"ContainerDied","Data":"d23958cd01ad85315a3ab5bcf113d419d63efa7b38c3faf818df4e8de6a52242"} Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.651451 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d23958cd01ad85315a3ab5bcf113d419d63efa7b38c3faf818df4e8de6a52242" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.651483 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-fccfg" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.716089 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h"] Sep 30 10:15:56 crc kubenswrapper[4970]: E0930 10:15:56.716762 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d7598c9-6363-43e1-8913-1b7707fb57eb" containerName="ssh-known-hosts-edpm-deployment" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.716785 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d7598c9-6363-43e1-8913-1b7707fb57eb" containerName="ssh-known-hosts-edpm-deployment" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.717079 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d7598c9-6363-43e1-8913-1b7707fb57eb" containerName="ssh-known-hosts-edpm-deployment" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.718136 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.723608 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.723871 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.724446 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.726066 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h"] Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.728373 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.809200 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx8cs\" (UniqueName: \"kubernetes.io/projected/74d48203-8780-4ee2-8db2-39388705bab0-kube-api-access-zx8cs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vg96h\" (UID: \"74d48203-8780-4ee2-8db2-39388705bab0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.809572 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74d48203-8780-4ee2-8db2-39388705bab0-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vg96h\" (UID: \"74d48203-8780-4ee2-8db2-39388705bab0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.810764 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d48203-8780-4ee2-8db2-39388705bab0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vg96h\" (UID: \"74d48203-8780-4ee2-8db2-39388705bab0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.913095 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx8cs\" (UniqueName: \"kubernetes.io/projected/74d48203-8780-4ee2-8db2-39388705bab0-kube-api-access-zx8cs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vg96h\" (UID: \"74d48203-8780-4ee2-8db2-39388705bab0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.913167 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74d48203-8780-4ee2-8db2-39388705bab0-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vg96h\" (UID: \"74d48203-8780-4ee2-8db2-39388705bab0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.913187 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d48203-8780-4ee2-8db2-39388705bab0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vg96h\" (UID: \"74d48203-8780-4ee2-8db2-39388705bab0\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.918255 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d48203-8780-4ee2-8db2-39388705bab0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vg96h\" (UID: \"74d48203-8780-4ee2-8db2-39388705bab0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.919164 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74d48203-8780-4ee2-8db2-39388705bab0-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vg96h\" (UID: \"74d48203-8780-4ee2-8db2-39388705bab0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:15:56 crc kubenswrapper[4970]: I0930 10:15:56.935828 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx8cs\" (UniqueName: \"kubernetes.io/projected/74d48203-8780-4ee2-8db2-39388705bab0-kube-api-access-zx8cs\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-vg96h\" (UID: \"74d48203-8780-4ee2-8db2-39388705bab0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:15:57 crc kubenswrapper[4970]: I0930 10:15:57.048196 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:15:57 crc kubenswrapper[4970]: I0930 10:15:57.610440 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h"] Sep 30 10:15:57 crc kubenswrapper[4970]: I0930 10:15:57.663957 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" event={"ID":"74d48203-8780-4ee2-8db2-39388705bab0","Type":"ContainerStarted","Data":"5b8eee16389441d4b34b5c1bcd277cdf50b894b5cece0be207f838a497180dd7"} Sep 30 10:15:58 crc kubenswrapper[4970]: I0930 10:15:58.669324 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:15:58 crc kubenswrapper[4970]: E0930 10:15:58.670131 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:15:58 crc kubenswrapper[4970]: I0930 10:15:58.674340 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" event={"ID":"74d48203-8780-4ee2-8db2-39388705bab0","Type":"ContainerStarted","Data":"b4c51fc20443207beb80d57efbfa08868304d131b7bf91649e83202fe35dd9ce"} Sep 30 10:16:06 crc kubenswrapper[4970]: I0930 10:16:06.754142 4970 generic.go:334] "Generic (PLEG): container finished" podID="74d48203-8780-4ee2-8db2-39388705bab0" containerID="b4c51fc20443207beb80d57efbfa08868304d131b7bf91649e83202fe35dd9ce" exitCode=0 Sep 30 10:16:06 crc kubenswrapper[4970]: I0930 10:16:06.754276 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" 
event={"ID":"74d48203-8780-4ee2-8db2-39388705bab0","Type":"ContainerDied","Data":"b4c51fc20443207beb80d57efbfa08868304d131b7bf91649e83202fe35dd9ce"} Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.165160 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.243748 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d48203-8780-4ee2-8db2-39388705bab0-inventory\") pod \"74d48203-8780-4ee2-8db2-39388705bab0\" (UID: \"74d48203-8780-4ee2-8db2-39388705bab0\") " Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.244016 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx8cs\" (UniqueName: \"kubernetes.io/projected/74d48203-8780-4ee2-8db2-39388705bab0-kube-api-access-zx8cs\") pod \"74d48203-8780-4ee2-8db2-39388705bab0\" (UID: \"74d48203-8780-4ee2-8db2-39388705bab0\") " Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.244086 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74d48203-8780-4ee2-8db2-39388705bab0-ssh-key\") pod \"74d48203-8780-4ee2-8db2-39388705bab0\" (UID: \"74d48203-8780-4ee2-8db2-39388705bab0\") " Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.255150 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74d48203-8780-4ee2-8db2-39388705bab0-kube-api-access-zx8cs" (OuterVolumeSpecName: "kube-api-access-zx8cs") pod "74d48203-8780-4ee2-8db2-39388705bab0" (UID: "74d48203-8780-4ee2-8db2-39388705bab0"). InnerVolumeSpecName "kube-api-access-zx8cs". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.275474 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d48203-8780-4ee2-8db2-39388705bab0-inventory" (OuterVolumeSpecName: "inventory") pod "74d48203-8780-4ee2-8db2-39388705bab0" (UID: "74d48203-8780-4ee2-8db2-39388705bab0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.275599 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74d48203-8780-4ee2-8db2-39388705bab0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "74d48203-8780-4ee2-8db2-39388705bab0" (UID: "74d48203-8780-4ee2-8db2-39388705bab0"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.346720 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx8cs\" (UniqueName: \"kubernetes.io/projected/74d48203-8780-4ee2-8db2-39388705bab0-kube-api-access-zx8cs\") on node \"crc\" DevicePath \"\"" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.346759 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/74d48203-8780-4ee2-8db2-39388705bab0-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.346772 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/74d48203-8780-4ee2-8db2-39388705bab0-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.790176 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" event={"ID":"74d48203-8780-4ee2-8db2-39388705bab0","Type":"ContainerDied","Data":"5b8eee16389441d4b34b5c1bcd277cdf50b894b5cece0be207f838a497180dd7"} Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.790223 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b8eee16389441d4b34b5c1bcd277cdf50b894b5cece0be207f838a497180dd7" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.790272 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-vg96h" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.858377 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj"] Sep 30 10:16:08 crc kubenswrapper[4970]: E0930 10:16:08.859041 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74d48203-8780-4ee2-8db2-39388705bab0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.859057 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="74d48203-8780-4ee2-8db2-39388705bab0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.859318 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="74d48203-8780-4ee2-8db2-39388705bab0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.860389 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.863083 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.863196 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.863314 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.864898 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.868047 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj"] Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.957715 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spxpn\" (UniqueName: \"kubernetes.io/projected/0cf74a13-4f04-472b-af17-1c856152950f-kube-api-access-spxpn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj\" (UID: \"0cf74a13-4f04-472b-af17-1c856152950f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.957764 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cf74a13-4f04-472b-af17-1c856152950f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj\" (UID: \"0cf74a13-4f04-472b-af17-1c856152950f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:08 crc kubenswrapper[4970]: I0930 10:16:08.957809 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf74a13-4f04-472b-af17-1c856152950f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj\" (UID: \"0cf74a13-4f04-472b-af17-1c856152950f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:09 crc kubenswrapper[4970]: I0930 10:16:09.059880 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spxpn\" (UniqueName: \"kubernetes.io/projected/0cf74a13-4f04-472b-af17-1c856152950f-kube-api-access-spxpn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj\" (UID: \"0cf74a13-4f04-472b-af17-1c856152950f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:09 crc kubenswrapper[4970]: I0930 10:16:09.059934 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cf74a13-4f04-472b-af17-1c856152950f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj\" (UID: \"0cf74a13-4f04-472b-af17-1c856152950f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:09 crc kubenswrapper[4970]: I0930 10:16:09.059978 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf74a13-4f04-472b-af17-1c856152950f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj\" (UID: 
\"0cf74a13-4f04-472b-af17-1c856152950f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:09 crc kubenswrapper[4970]: I0930 10:16:09.064799 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf74a13-4f04-472b-af17-1c856152950f-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj\" (UID: \"0cf74a13-4f04-472b-af17-1c856152950f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:09 crc kubenswrapper[4970]: I0930 10:16:09.064839 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cf74a13-4f04-472b-af17-1c856152950f-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj\" (UID: \"0cf74a13-4f04-472b-af17-1c856152950f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:09 crc kubenswrapper[4970]: I0930 10:16:09.077233 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spxpn\" (UniqueName: \"kubernetes.io/projected/0cf74a13-4f04-472b-af17-1c856152950f-kube-api-access-spxpn\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj\" (UID: \"0cf74a13-4f04-472b-af17-1c856152950f\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:09 crc kubenswrapper[4970]: I0930 10:16:09.197166 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:09 crc kubenswrapper[4970]: I0930 10:16:09.668490 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:16:09 crc kubenswrapper[4970]: E0930 10:16:09.668807 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:16:09 crc kubenswrapper[4970]: I0930 10:16:09.762656 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj"] Sep 30 10:16:09 crc kubenswrapper[4970]: I0930 10:16:09.805438 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" event={"ID":"0cf74a13-4f04-472b-af17-1c856152950f","Type":"ContainerStarted","Data":"9a8b75e733846876df8c6b4aef5b882d78e2a48c90070f0b58c610cad0e3990d"} Sep 30 10:16:13 crc kubenswrapper[4970]: I0930 10:16:13.051264 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-sv252"] Sep 30 10:16:13 crc kubenswrapper[4970]: I0930 10:16:13.058880 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-sv252"] Sep 30 10:16:13 crc kubenswrapper[4970]: I0930 10:16:13.795274 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05036766-e1fb-4fee-b4bc-5ff318fa9793" path="/var/lib/kubelet/pods/05036766-e1fb-4fee-b4bc-5ff318fa9793/volumes" Sep 30 10:16:14 crc kubenswrapper[4970]: I0930 10:16:14.854694 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" 
event={"ID":"0cf74a13-4f04-472b-af17-1c856152950f","Type":"ContainerStarted","Data":"056ea9fe2e92a6c1d4261e3807eec58b4aa9da4ec4a3fc04ad7eeda64432f8d3"} Sep 30 10:16:14 crc kubenswrapper[4970]: I0930 10:16:14.873310 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" podStartSLOduration=2.563214865 podStartE2EDuration="6.873293934s" podCreationTimestamp="2025-09-30 10:16:08 +0000 UTC" firstStartedPulling="2025-09-30 10:16:09.771265085 +0000 UTC m=+1782.843116019" lastFinishedPulling="2025-09-30 10:16:14.081344134 +0000 UTC m=+1787.153195088" observedRunningTime="2025-09-30 10:16:14.871624969 +0000 UTC m=+1787.943475903" watchObservedRunningTime="2025-09-30 10:16:14.873293934 +0000 UTC m=+1787.945144868" Sep 30 10:16:23 crc kubenswrapper[4970]: I0930 10:16:23.932935 4970 generic.go:334] "Generic (PLEG): container finished" podID="0cf74a13-4f04-472b-af17-1c856152950f" containerID="056ea9fe2e92a6c1d4261e3807eec58b4aa9da4ec4a3fc04ad7eeda64432f8d3" exitCode=0 Sep 30 10:16:23 crc kubenswrapper[4970]: I0930 10:16:23.933030 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" event={"ID":"0cf74a13-4f04-472b-af17-1c856152950f","Type":"ContainerDied","Data":"056ea9fe2e92a6c1d4261e3807eec58b4aa9da4ec4a3fc04ad7eeda64432f8d3"} Sep 30 10:16:24 crc kubenswrapper[4970]: I0930 10:16:24.668237 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:16:24 crc kubenswrapper[4970]: E0930 10:16:24.669032 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.384548 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.517944 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cf74a13-4f04-472b-af17-1c856152950f-ssh-key\") pod \"0cf74a13-4f04-472b-af17-1c856152950f\" (UID: \"0cf74a13-4f04-472b-af17-1c856152950f\") " Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.518083 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spxpn\" (UniqueName: \"kubernetes.io/projected/0cf74a13-4f04-472b-af17-1c856152950f-kube-api-access-spxpn\") pod \"0cf74a13-4f04-472b-af17-1c856152950f\" (UID: \"0cf74a13-4f04-472b-af17-1c856152950f\") " Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.518211 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf74a13-4f04-472b-af17-1c856152950f-inventory\") pod \"0cf74a13-4f04-472b-af17-1c856152950f\" (UID: \"0cf74a13-4f04-472b-af17-1c856152950f\") " Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.524033 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cf74a13-4f04-472b-af17-1c856152950f-kube-api-access-spxpn" (OuterVolumeSpecName: "kube-api-access-spxpn") pod "0cf74a13-4f04-472b-af17-1c856152950f" (UID: "0cf74a13-4f04-472b-af17-1c856152950f"). InnerVolumeSpecName "kube-api-access-spxpn". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.546442 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf74a13-4f04-472b-af17-1c856152950f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0cf74a13-4f04-472b-af17-1c856152950f" (UID: "0cf74a13-4f04-472b-af17-1c856152950f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.547375 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0cf74a13-4f04-472b-af17-1c856152950f-inventory" (OuterVolumeSpecName: "inventory") pod "0cf74a13-4f04-472b-af17-1c856152950f" (UID: "0cf74a13-4f04-472b-af17-1c856152950f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.620456 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0cf74a13-4f04-472b-af17-1c856152950f-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.620484 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0cf74a13-4f04-472b-af17-1c856152950f-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.620494 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spxpn\" (UniqueName: \"kubernetes.io/projected/0cf74a13-4f04-472b-af17-1c856152950f-kube-api-access-spxpn\") on node \"crc\" DevicePath \"\"" Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.959955 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" event={"ID":"0cf74a13-4f04-472b-af17-1c856152950f","Type":"ContainerDied","Data":"9a8b75e733846876df8c6b4aef5b882d78e2a48c90070f0b58c610cad0e3990d"} Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.960080 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a8b75e733846876df8c6b4aef5b882d78e2a48c90070f0b58c610cad0e3990d" Sep 30 10:16:25 crc kubenswrapper[4970]: I0930 10:16:25.960188 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.062566 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc"] Sep 30 10:16:26 crc kubenswrapper[4970]: E0930 10:16:26.062939 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cf74a13-4f04-472b-af17-1c856152950f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.062958 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cf74a13-4f04-472b-af17-1c856152950f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.063165 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cf74a13-4f04-472b-af17-1c856152950f" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.063773 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.067002 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.070413 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.070420 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.070640 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.071258 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.071587 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.072038 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.075052 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.082942 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc"] Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.233252 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.233436 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.233516 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.233579 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.233723 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.233785 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.233833 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.233867 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.233930 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.234089 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.234138 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.234184 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.234225 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.234262 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls6zb\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-kube-api-access-ls6zb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.336255 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.336372 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.336430 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.336477 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.336592 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.336631 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.336677 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.336713 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.336768 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.336851 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.336909 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.336982 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.337100 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.337211 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls6zb\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-kube-api-access-ls6zb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.342154 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.342230 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.342345 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.342488 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.342397 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.342908 4970 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.343149 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.343301 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.344472 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.347804 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.347746 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.349391 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.351272 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-ovn-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.357621 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls6zb\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-kube-api-access-ls6zb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.385290 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.952016 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc"] Sep 30 10:16:26 crc kubenswrapper[4970]: W0930 10:16:26.952327 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd25a3525_d30d_4e16_b446_dace1e7987a0.slice/crio-74c592b2a9c49b4bfa2584f25c5a57996dceff172d02b20fe4f23c758193c101 WatchSource:0}: Error finding container 74c592b2a9c49b4bfa2584f25c5a57996dceff172d02b20fe4f23c758193c101: Status 404 returned error can't find the container with id 74c592b2a9c49b4bfa2584f25c5a57996dceff172d02b20fe4f23c758193c101 Sep 30 10:16:26 crc kubenswrapper[4970]: I0930 10:16:26.970761 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" event={"ID":"d25a3525-d30d-4e16-b446-dace1e7987a0","Type":"ContainerStarted","Data":"74c592b2a9c49b4bfa2584f25c5a57996dceff172d02b20fe4f23c758193c101"} Sep 30 10:16:27 crc kubenswrapper[4970]: I0930 10:16:27.981974 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" event={"ID":"d25a3525-d30d-4e16-b446-dace1e7987a0","Type":"ContainerStarted","Data":"5378c34a1f102dd448b6ca499d057410cf72b22f289270d08e5e454b0b355a95"} Sep 30 10:16:28 crc kubenswrapper[4970]: I0930 10:16:28.006138 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" podStartSLOduration=1.5942746589999999 podStartE2EDuration="2.006119463s" podCreationTimestamp="2025-09-30 10:16:26 +0000 UTC" firstStartedPulling="2025-09-30 10:16:26.955618302 +0000 UTC m=+1800.027469246" lastFinishedPulling="2025-09-30 10:16:27.367463116 +0000 UTC m=+1800.439314050" observedRunningTime="2025-09-30 10:16:28.00456591 +0000 UTC m=+1801.076416844" watchObservedRunningTime="2025-09-30 10:16:28.006119463 +0000 UTC m=+1801.077970397" Sep 30 10:16:36 crc kubenswrapper[4970]: I0930 10:16:36.668919 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:16:36 crc kubenswrapper[4970]: E0930 10:16:36.669828 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" 
podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:16:48 crc kubenswrapper[4970]: I0930 10:16:48.668785 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:16:48 crc kubenswrapper[4970]: E0930 10:16:48.669721 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:16:53 crc kubenswrapper[4970]: I0930 10:16:53.878676 4970 scope.go:117] "RemoveContainer" containerID="709a65b17e3ac82455041dd6bf3eb091fd4efe7d568274319b67ad3be658cd43" Sep 30 10:17:01 crc kubenswrapper[4970]: I0930 10:17:01.668572 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:17:01 crc kubenswrapper[4970]: E0930 10:17:01.669338 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:17:04 crc kubenswrapper[4970]: I0930 10:17:04.328756 4970 generic.go:334] "Generic (PLEG): container finished" podID="d25a3525-d30d-4e16-b446-dace1e7987a0" containerID="5378c34a1f102dd448b6ca499d057410cf72b22f289270d08e5e454b0b355a95" exitCode=0 Sep 30 10:17:04 crc kubenswrapper[4970]: I0930 10:17:04.328981 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" event={"ID":"d25a3525-d30d-4e16-b446-dace1e7987a0","Type":"ContainerDied","Data":"5378c34a1f102dd448b6ca499d057410cf72b22f289270d08e5e454b0b355a95"} Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.781702 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.786444 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-inventory\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.786521 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls6zb\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-kube-api-access-ls6zb\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.786602 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-repo-setup-combined-ca-bundle\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.786659 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-ssh-key\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.786710 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.786757 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-neutron-metadata-combined-ca-bundle\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.786815 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-libvirt-combined-ca-bundle\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.786838 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-ovn-combined-ca-bundle\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.786895 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 
10:17:05.786925 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-bootstrap-combined-ca-bundle\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.794098 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.794836 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.794999 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.795769 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.796049 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.796187 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.797365 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.798090 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-kube-api-access-ls6zb" (OuterVolumeSpecName: "kube-api-access-ls6zb") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "kube-api-access-ls6zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.831428 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-inventory" (OuterVolumeSpecName: "inventory") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.844144 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.889287 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.889580 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-nova-combined-ca-bundle\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.889607 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-ovn-default-certs-0\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.889625 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-telemetry-combined-ca-bundle\") pod \"d25a3525-d30d-4e16-b446-dace1e7987a0\" (UID: \"d25a3525-d30d-4e16-b446-dace1e7987a0\") " Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.890005 4970 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.890021 4970 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.890030 4970 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.890040 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.890048 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls6zb\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-kube-api-access-ls6zb\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.890056 4970 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.890064 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.890073 4970 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.890081 4970 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.890089 4970 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.892611 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.893598 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.894193 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.896265 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "d25a3525-d30d-4e16-b446-dace1e7987a0" (UID: "d25a3525-d30d-4e16-b446-dace1e7987a0"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.992644 4970 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.992676 4970 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.992690 4970 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d25a3525-d30d-4e16-b446-dace1e7987a0-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:05 crc kubenswrapper[4970]: I0930 10:17:05.992700 4970 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/d25a3525-d30d-4e16-b446-dace1e7987a0-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.359610 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" event={"ID":"d25a3525-d30d-4e16-b446-dace1e7987a0","Type":"ContainerDied","Data":"74c592b2a9c49b4bfa2584f25c5a57996dceff172d02b20fe4f23c758193c101"} Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.359648 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74c592b2a9c49b4bfa2584f25c5a57996dceff172d02b20fe4f23c758193c101" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.359646 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.527405 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r"] Sep 30 10:17:06 crc kubenswrapper[4970]: E0930 10:17:06.527803 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d25a3525-d30d-4e16-b446-dace1e7987a0" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.527822 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d25a3525-d30d-4e16-b446-dace1e7987a0" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.528059 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d25a3525-d30d-4e16-b446-dace1e7987a0" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.528896 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.531058 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.531223 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.531715 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.532119 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.532229 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.547940 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r"] Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.708042 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.708119 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.708215 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/450706d9-395f-417b-b37d-7ded156dce3a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.708245 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.708357 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhkxr\" (UniqueName: \"kubernetes.io/projected/450706d9-395f-417b-b37d-7ded156dce3a-kube-api-access-nhkxr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.810762 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.810845 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.810968 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/450706d9-395f-417b-b37d-7ded156dce3a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.811016 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.811102 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhkxr\" (UniqueName: \"kubernetes.io/projected/450706d9-395f-417b-b37d-7ded156dce3a-kube-api-access-nhkxr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.812686 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/450706d9-395f-417b-b37d-7ded156dce3a-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 
10:17:06.814511 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.814512 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.815213 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.831196 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhkxr\" (UniqueName: \"kubernetes.io/projected/450706d9-395f-417b-b37d-7ded156dce3a-kube-api-access-nhkxr\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dwx4r\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:06 crc kubenswrapper[4970]: I0930 10:17:06.889951 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:17:07 crc kubenswrapper[4970]: I0930 10:17:07.403740 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r"] Sep 30 10:17:08 crc kubenswrapper[4970]: I0930 10:17:08.379157 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" event={"ID":"450706d9-395f-417b-b37d-7ded156dce3a","Type":"ContainerStarted","Data":"a74dfb3c37a473edcaa53c4c910fadf21e5bf65783a7c362eaf0da25ab93e644"} Sep 30 10:17:08 crc kubenswrapper[4970]: I0930 10:17:08.382365 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" event={"ID":"450706d9-395f-417b-b37d-7ded156dce3a","Type":"ContainerStarted","Data":"e55daa2b61900165dc9e198fe13c0c9553b7323539b4a9c2881ec411b74236a6"} Sep 30 10:17:08 crc kubenswrapper[4970]: I0930 10:17:08.394443 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" podStartSLOduration=1.902597304 podStartE2EDuration="2.394421855s" podCreationTimestamp="2025-09-30 10:17:06 +0000 UTC" firstStartedPulling="2025-09-30 10:17:07.409300982 +0000 UTC m=+1840.481151916" lastFinishedPulling="2025-09-30 10:17:07.901125533 +0000 UTC m=+1840.972976467" observedRunningTime="2025-09-30 10:17:08.393386317 +0000 UTC m=+1841.465237291" watchObservedRunningTime="2025-09-30 10:17:08.394421855 +0000 UTC m=+1841.466272789" Sep 30 10:17:14 crc kubenswrapper[4970]: I0930 10:17:14.668325 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:17:14 crc kubenswrapper[4970]: E0930 10:17:14.669132 4970 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:17:28 crc kubenswrapper[4970]: I0930 10:17:28.668944 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:17:28 crc kubenswrapper[4970]: E0930 10:17:28.669841 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:17:43 crc kubenswrapper[4970]: I0930 10:17:43.669588 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:17:43 crc kubenswrapper[4970]: E0930 10:17:43.670606 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:17:57 crc kubenswrapper[4970]: I0930 10:17:57.677161 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:17:57 crc kubenswrapper[4970]: E0930 10:17:57.677837 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:18:00 crc kubenswrapper[4970]: I0930 10:18:00.733718 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2cx6p"] Sep 30 10:18:00 crc kubenswrapper[4970]: I0930 10:18:00.737638 4970 util.go:30] "No sandbox for pod can be found. 
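[Editor's note: the redhat-marketplace and certified-operators pods that appear next are OLM catalog pods. The volume entries that follow show only two emptyDir volumes ("utilities", "catalog-content") plus the auto-injected projected service-account token (kube-api-access-*). A minimal sketch of that shape in Go types, inferred from the volume names in this log rather than from the actual CatalogSource manifest:]

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func catalogPodSkeleton() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "redhat-marketplace-2cx6p",
			Namespace: "openshift-marketplace",
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				// Scratch space shared between the content-extract step and the registry server.
				{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
				{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
				// kube-api-access-cz6bv is injected by the API server as a projected token volume.
			},
		},
	}
}

func main() { fmt.Println(catalogPodSkeleton().Name) }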
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:00 crc kubenswrapper[4970]: I0930 10:18:00.756604 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cx6p"] Sep 30 10:18:00 crc kubenswrapper[4970]: I0930 10:18:00.921638 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54c8400-c95f-4b8d-8a34-bee15e7112ad-utilities\") pod \"redhat-marketplace-2cx6p\" (UID: \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\") " pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:00 crc kubenswrapper[4970]: I0930 10:18:00.921924 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f54c8400-c95f-4b8d-8a34-bee15e7112ad-catalog-content\") pod \"redhat-marketplace-2cx6p\" (UID: \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\") " pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:00 crc kubenswrapper[4970]: I0930 10:18:00.922076 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz6bv\" (UniqueName: \"kubernetes.io/projected/f54c8400-c95f-4b8d-8a34-bee15e7112ad-kube-api-access-cz6bv\") pod \"redhat-marketplace-2cx6p\" (UID: \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\") " pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.024281 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f54c8400-c95f-4b8d-8a34-bee15e7112ad-catalog-content\") pod \"redhat-marketplace-2cx6p\" (UID: \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\") " pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.024369 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz6bv\" (UniqueName: \"kubernetes.io/projected/f54c8400-c95f-4b8d-8a34-bee15e7112ad-kube-api-access-cz6bv\") pod \"redhat-marketplace-2cx6p\" (UID: \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\") " pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.024419 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54c8400-c95f-4b8d-8a34-bee15e7112ad-utilities\") pod \"redhat-marketplace-2cx6p\" (UID: \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\") " pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.024810 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f54c8400-c95f-4b8d-8a34-bee15e7112ad-catalog-content\") pod \"redhat-marketplace-2cx6p\" (UID: \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\") " pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.024843 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54c8400-c95f-4b8d-8a34-bee15e7112ad-utilities\") pod \"redhat-marketplace-2cx6p\" (UID: \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\") " pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.045800 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cz6bv\" (UniqueName: \"kubernetes.io/projected/f54c8400-c95f-4b8d-8a34-bee15e7112ad-kube-api-access-cz6bv\") pod \"redhat-marketplace-2cx6p\" (UID: \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\") " pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.069176 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.328053 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l9pfx"] Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.332979 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.341616 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9pfx"] Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.419834 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cx6p"] Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.430651 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-utilities\") pod \"certified-operators-l9pfx\" (UID: \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\") " pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.430716 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26889\" (UniqueName: \"kubernetes.io/projected/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-kube-api-access-26889\") pod \"certified-operators-l9pfx\" (UID: \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\") " pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.430783 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-catalog-content\") pod \"certified-operators-l9pfx\" (UID: \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\") " pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.532724 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-utilities\") pod \"certified-operators-l9pfx\" (UID: \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\") " pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.532830 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26889\" (UniqueName: \"kubernetes.io/projected/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-kube-api-access-26889\") pod \"certified-operators-l9pfx\" (UID: \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\") " pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.532925 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-catalog-content\") pod \"certified-operators-l9pfx\" (UID: 
\"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\") " pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.533338 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-utilities\") pod \"certified-operators-l9pfx\" (UID: \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\") " pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.533391 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-catalog-content\") pod \"certified-operators-l9pfx\" (UID: \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\") " pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.562058 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26889\" (UniqueName: \"kubernetes.io/projected/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-kube-api-access-26889\") pod \"certified-operators-l9pfx\" (UID: \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\") " pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.659062 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.861793 4970 generic.go:334] "Generic (PLEG): container finished" podID="f54c8400-c95f-4b8d-8a34-bee15e7112ad" containerID="9cefb68e10428b5a5438d989709eaba0b7e06995d7ea65f257277e28cbea22c2" exitCode=0 Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.862074 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cx6p" event={"ID":"f54c8400-c95f-4b8d-8a34-bee15e7112ad","Type":"ContainerDied","Data":"9cefb68e10428b5a5438d989709eaba0b7e06995d7ea65f257277e28cbea22c2"} Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.862102 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cx6p" event={"ID":"f54c8400-c95f-4b8d-8a34-bee15e7112ad","Type":"ContainerStarted","Data":"6b2b815aae6383d8f7d292cbcae17cd739e54d0bc7f4134650401d116d9389ae"} Sep 30 10:18:01 crc kubenswrapper[4970]: I0930 10:18:01.873124 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 10:18:02 crc kubenswrapper[4970]: I0930 10:18:02.175045 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9pfx"] Sep 30 10:18:02 crc kubenswrapper[4970]: I0930 10:18:02.871495 4970 generic.go:334] "Generic (PLEG): container finished" podID="4d1e31e9-3686-4a19-820c-bbed54c8c5a4" containerID="127ea5e65288d492324f6d925d64d59daae721933a9380bbff324334a22f94b0" exitCode=0 Sep 30 10:18:02 crc kubenswrapper[4970]: I0930 10:18:02.871578 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9pfx" event={"ID":"4d1e31e9-3686-4a19-820c-bbed54c8c5a4","Type":"ContainerDied","Data":"127ea5e65288d492324f6d925d64d59daae721933a9380bbff324334a22f94b0"} Sep 30 10:18:02 crc kubenswrapper[4970]: I0930 10:18:02.871767 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9pfx" 
event={"ID":"4d1e31e9-3686-4a19-820c-bbed54c8c5a4","Type":"ContainerStarted","Data":"d957c4edfe67cda20a4e29cf9397606e28b751e4797abf3651f4c40a843b742e"} Sep 30 10:18:03 crc kubenswrapper[4970]: I0930 10:18:03.888301 4970 generic.go:334] "Generic (PLEG): container finished" podID="f54c8400-c95f-4b8d-8a34-bee15e7112ad" containerID="d96ffb34060c790698def07c9f1fcff9e01f2fd1634a3476671354b8351027fe" exitCode=0 Sep 30 10:18:03 crc kubenswrapper[4970]: I0930 10:18:03.888374 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cx6p" event={"ID":"f54c8400-c95f-4b8d-8a34-bee15e7112ad","Type":"ContainerDied","Data":"d96ffb34060c790698def07c9f1fcff9e01f2fd1634a3476671354b8351027fe"} Sep 30 10:18:03 crc kubenswrapper[4970]: I0930 10:18:03.890108 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9pfx" event={"ID":"4d1e31e9-3686-4a19-820c-bbed54c8c5a4","Type":"ContainerStarted","Data":"fcd79429c1e3387e47e776e84e7e9cf06c77d8e74208910128f5e757e6fb759e"} Sep 30 10:18:04 crc kubenswrapper[4970]: I0930 10:18:04.900546 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cx6p" event={"ID":"f54c8400-c95f-4b8d-8a34-bee15e7112ad","Type":"ContainerStarted","Data":"4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2"} Sep 30 10:18:04 crc kubenswrapper[4970]: I0930 10:18:04.925054 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2cx6p" podStartSLOduration=2.469614683 podStartE2EDuration="4.925035422s" podCreationTimestamp="2025-09-30 10:18:00 +0000 UTC" firstStartedPulling="2025-09-30 10:18:01.872800287 +0000 UTC m=+1894.944651221" lastFinishedPulling="2025-09-30 10:18:04.328221026 +0000 UTC m=+1897.400071960" observedRunningTime="2025-09-30 10:18:04.919432308 +0000 UTC m=+1897.991283242" watchObservedRunningTime="2025-09-30 10:18:04.925035422 +0000 UTC m=+1897.996886356" Sep 30 10:18:05 crc kubenswrapper[4970]: I0930 10:18:05.910711 4970 generic.go:334] "Generic (PLEG): container finished" podID="450706d9-395f-417b-b37d-7ded156dce3a" containerID="a74dfb3c37a473edcaa53c4c910fadf21e5bf65783a7c362eaf0da25ab93e644" exitCode=0 Sep 30 10:18:05 crc kubenswrapper[4970]: I0930 10:18:05.910807 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" event={"ID":"450706d9-395f-417b-b37d-7ded156dce3a","Type":"ContainerDied","Data":"a74dfb3c37a473edcaa53c4c910fadf21e5bf65783a7c362eaf0da25ab93e644"} Sep 30 10:18:06 crc kubenswrapper[4970]: I0930 10:18:06.925687 4970 generic.go:334] "Generic (PLEG): container finished" podID="4d1e31e9-3686-4a19-820c-bbed54c8c5a4" containerID="fcd79429c1e3387e47e776e84e7e9cf06c77d8e74208910128f5e757e6fb759e" exitCode=0 Sep 30 10:18:06 crc kubenswrapper[4970]: I0930 10:18:06.925742 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9pfx" event={"ID":"4d1e31e9-3686-4a19-820c-bbed54c8c5a4","Type":"ContainerDied","Data":"fcd79429c1e3387e47e776e84e7e9cf06c77d8e74208910128f5e757e6fb759e"} Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.345608 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.457441 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-inventory\") pod \"450706d9-395f-417b-b37d-7ded156dce3a\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.457509 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/450706d9-395f-417b-b37d-7ded156dce3a-ovncontroller-config-0\") pod \"450706d9-395f-417b-b37d-7ded156dce3a\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.457585 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-ovn-combined-ca-bundle\") pod \"450706d9-395f-417b-b37d-7ded156dce3a\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.457825 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-ssh-key\") pod \"450706d9-395f-417b-b37d-7ded156dce3a\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.457857 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhkxr\" (UniqueName: \"kubernetes.io/projected/450706d9-395f-417b-b37d-7ded156dce3a-kube-api-access-nhkxr\") pod \"450706d9-395f-417b-b37d-7ded156dce3a\" (UID: \"450706d9-395f-417b-b37d-7ded156dce3a\") " Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.465955 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/450706d9-395f-417b-b37d-7ded156dce3a-kube-api-access-nhkxr" (OuterVolumeSpecName: "kube-api-access-nhkxr") pod "450706d9-395f-417b-b37d-7ded156dce3a" (UID: "450706d9-395f-417b-b37d-7ded156dce3a"). InnerVolumeSpecName "kube-api-access-nhkxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.471376 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "450706d9-395f-417b-b37d-7ded156dce3a" (UID: "450706d9-395f-417b-b37d-7ded156dce3a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.485708 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "450706d9-395f-417b-b37d-7ded156dce3a" (UID: "450706d9-395f-417b-b37d-7ded156dce3a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.496108 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-inventory" (OuterVolumeSpecName: "inventory") pod "450706d9-395f-417b-b37d-7ded156dce3a" (UID: "450706d9-395f-417b-b37d-7ded156dce3a"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.513256 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/450706d9-395f-417b-b37d-7ded156dce3a-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "450706d9-395f-417b-b37d-7ded156dce3a" (UID: "450706d9-395f-417b-b37d-7ded156dce3a"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.560669 4970 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.560703 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.560716 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhkxr\" (UniqueName: \"kubernetes.io/projected/450706d9-395f-417b-b37d-7ded156dce3a-kube-api-access-nhkxr\") on node \"crc\" DevicePath \"\"" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.560727 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/450706d9-395f-417b-b37d-7ded156dce3a-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.560738 4970 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/450706d9-395f-417b-b37d-7ded156dce3a-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.937649 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9pfx" event={"ID":"4d1e31e9-3686-4a19-820c-bbed54c8c5a4","Type":"ContainerStarted","Data":"c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617"} Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.941188 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" event={"ID":"450706d9-395f-417b-b37d-7ded156dce3a","Type":"ContainerDied","Data":"e55daa2b61900165dc9e198fe13c0c9553b7323539b4a9c2881ec411b74236a6"} Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.941227 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e55daa2b61900165dc9e198fe13c0c9553b7323539b4a9c2881ec411b74236a6" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.941277 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dwx4r" Sep 30 10:18:07 crc kubenswrapper[4970]: I0930 10:18:07.987642 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l9pfx" podStartSLOduration=2.383012411 podStartE2EDuration="6.98758798s" podCreationTimestamp="2025-09-30 10:18:01 +0000 UTC" firstStartedPulling="2025-09-30 10:18:02.873219555 +0000 UTC m=+1895.945070479" lastFinishedPulling="2025-09-30 10:18:07.477795114 +0000 UTC m=+1900.549646048" observedRunningTime="2025-09-30 10:18:07.980378862 +0000 UTC m=+1901.052229786" watchObservedRunningTime="2025-09-30 10:18:07.98758798 +0000 UTC m=+1901.059438924" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.041147 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm"] Sep 30 10:18:08 crc kubenswrapper[4970]: E0930 10:18:08.041822 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="450706d9-395f-417b-b37d-7ded156dce3a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.041931 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="450706d9-395f-417b-b37d-7ded156dce3a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.042322 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="450706d9-395f-417b-b37d-7ded156dce3a" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.043365 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.046476 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.046538 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.046785 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.046799 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.047000 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.048443 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.052222 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm"] Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.170677 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc 
kubenswrapper[4970]: I0930 10:18:08.170730 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.170756 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.170816 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.170864 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.170886 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhvlt\" (UniqueName: \"kubernetes.io/projected/be128dee-e0c7-4db4-a760-b03c3b8d263d-kube-api-access-dhvlt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.272312 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.272363 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.272388 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" 
(UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.272447 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.272500 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.272525 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhvlt\" (UniqueName: \"kubernetes.io/projected/be128dee-e0c7-4db4-a760-b03c3b8d263d-kube-api-access-dhvlt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.277004 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.277313 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.278501 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.280355 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:08 crc 
kubenswrapper[4970]: I0930 10:18:08.287278 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm"
Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.290847 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhvlt\" (UniqueName: \"kubernetes.io/projected/be128dee-e0c7-4db4-a760-b03c3b8d263d-kube-api-access-dhvlt\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm"
Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.365753 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm"
Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.928020 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm"]
Sep 30 10:18:08 crc kubenswrapper[4970]: W0930 10:18:08.930135 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe128dee_e0c7_4db4_a760_b03c3b8d263d.slice/crio-1c152d3bb37eb902c963bacd99a1235ec79166f171b3b0aacaffd89d194d421f WatchSource:0}: Error finding container 1c152d3bb37eb902c963bacd99a1235ec79166f171b3b0aacaffd89d194d421f: Status 404 returned error can't find the container with id 1c152d3bb37eb902c963bacd99a1235ec79166f171b3b0aacaffd89d194d421f
Sep 30 10:18:08 crc kubenswrapper[4970]: I0930 10:18:08.950855 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" event={"ID":"be128dee-e0c7-4db4-a760-b03c3b8d263d","Type":"ContainerStarted","Data":"1c152d3bb37eb902c963bacd99a1235ec79166f171b3b0aacaffd89d194d421f"}
Sep 30 10:18:09 crc kubenswrapper[4970]: I0930 10:18:09.961692 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" event={"ID":"be128dee-e0c7-4db4-a760-b03c3b8d263d","Type":"ContainerStarted","Data":"1e6b39c7ad8d6daa37847f6745796bcbe440cb11a953121fea1547fd9e699a22"}
Sep 30 10:18:09 crc kubenswrapper[4970]: I0930 10:18:09.986403 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" podStartSLOduration=1.470837776 podStartE2EDuration="1.986381211s" podCreationTimestamp="2025-09-30 10:18:08 +0000 UTC" firstStartedPulling="2025-09-30 10:18:08.932780804 +0000 UTC m=+1902.004631738" lastFinishedPulling="2025-09-30 10:18:09.448324229 +0000 UTC m=+1902.520175173" observedRunningTime="2025-09-30 10:18:09.982467484 +0000 UTC m=+1903.054318428" watchObservedRunningTime="2025-09-30 10:18:09.986381211 +0000 UTC m=+1903.058232145"
Sep 30 10:18:10 crc kubenswrapper[4970]: I0930 10:18:10.668694 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac"
Sep 30 10:18:10 crc kubenswrapper[4970]: E0930 10:18:10.669312 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
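The machine-config-daemon container is in CrashLoopBackOff; the "back-off 5m0s" in the entry above is the current restart delay, already at kubelet's cap. Kubelet's documented restart back-off starts at 10s, doubles per crash up to a 5m ceiling, and resets after 10 minutes of stable running. A sketch of that schedule:

```python
# Kubelet-style crash restart back-off: 10s base, doubling, capped at 5m.
def backoff_schedule(crashes, base=10, cap=300):
    delay = base
    for n in range(1, crashes + 1):
        yield n, min(delay, cap)
        delay *= 2

for n, d in backoff_schedule(7):
    print(f"crash #{n}: wait {d}s")   # 10, 20, 40, 80, 160, 300, 300
```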
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:18:11 crc kubenswrapper[4970]: I0930 10:18:11.069833 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:11 crc kubenswrapper[4970]: I0930 10:18:11.069909 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:11 crc kubenswrapper[4970]: I0930 10:18:11.152611 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:11 crc kubenswrapper[4970]: I0930 10:18:11.659949 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:11 crc kubenswrapper[4970]: I0930 10:18:11.660090 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:11 crc kubenswrapper[4970]: I0930 10:18:11.721954 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:12 crc kubenswrapper[4970]: I0930 10:18:12.071427 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:12 crc kubenswrapper[4970]: I0930 10:18:12.726528 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cx6p"] Sep 30 10:18:13 crc kubenswrapper[4970]: I0930 10:18:13.041159 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:14 crc kubenswrapper[4970]: I0930 10:18:14.008596 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2cx6p" podUID="f54c8400-c95f-4b8d-8a34-bee15e7112ad" containerName="registry-server" containerID="cri-o://4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2" gracePeriod=2 Sep 30 10:18:14 crc kubenswrapper[4970]: I0930 10:18:14.515398 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:14 crc kubenswrapper[4970]: I0930 10:18:14.605470 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz6bv\" (UniqueName: \"kubernetes.io/projected/f54c8400-c95f-4b8d-8a34-bee15e7112ad-kube-api-access-cz6bv\") pod \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\" (UID: \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\") " Sep 30 10:18:14 crc kubenswrapper[4970]: I0930 10:18:14.605532 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f54c8400-c95f-4b8d-8a34-bee15e7112ad-catalog-content\") pod \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\" (UID: \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\") " Sep 30 10:18:14 crc kubenswrapper[4970]: I0930 10:18:14.605590 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54c8400-c95f-4b8d-8a34-bee15e7112ad-utilities\") pod \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\" (UID: \"f54c8400-c95f-4b8d-8a34-bee15e7112ad\") " Sep 30 10:18:14 crc kubenswrapper[4970]: I0930 10:18:14.606933 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54c8400-c95f-4b8d-8a34-bee15e7112ad-utilities" (OuterVolumeSpecName: "utilities") pod "f54c8400-c95f-4b8d-8a34-bee15e7112ad" (UID: "f54c8400-c95f-4b8d-8a34-bee15e7112ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:18:14 crc kubenswrapper[4970]: I0930 10:18:14.611049 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f54c8400-c95f-4b8d-8a34-bee15e7112ad-kube-api-access-cz6bv" (OuterVolumeSpecName: "kube-api-access-cz6bv") pod "f54c8400-c95f-4b8d-8a34-bee15e7112ad" (UID: "f54c8400-c95f-4b8d-8a34-bee15e7112ad"). InnerVolumeSpecName "kube-api-access-cz6bv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:18:14 crc kubenswrapper[4970]: I0930 10:18:14.618260 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f54c8400-c95f-4b8d-8a34-bee15e7112ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f54c8400-c95f-4b8d-8a34-bee15e7112ad" (UID: "f54c8400-c95f-4b8d-8a34-bee15e7112ad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:18:14 crc kubenswrapper[4970]: I0930 10:18:14.708089 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz6bv\" (UniqueName: \"kubernetes.io/projected/f54c8400-c95f-4b8d-8a34-bee15e7112ad-kube-api-access-cz6bv\") on node \"crc\" DevicePath \"\"" Sep 30 10:18:14 crc kubenswrapper[4970]: I0930 10:18:14.708132 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f54c8400-c95f-4b8d-8a34-bee15e7112ad-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:18:14 crc kubenswrapper[4970]: I0930 10:18:14.708144 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f54c8400-c95f-4b8d-8a34-bee15e7112ad-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.026472 4970 generic.go:334] "Generic (PLEG): container finished" podID="f54c8400-c95f-4b8d-8a34-bee15e7112ad" containerID="4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2" exitCode=0 Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.026599 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cx6p" event={"ID":"f54c8400-c95f-4b8d-8a34-bee15e7112ad","Type":"ContainerDied","Data":"4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2"} Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.026892 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2cx6p" event={"ID":"f54c8400-c95f-4b8d-8a34-bee15e7112ad","Type":"ContainerDied","Data":"6b2b815aae6383d8f7d292cbcae17cd739e54d0bc7f4134650401d116d9389ae"} Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.026932 4970 scope.go:117] "RemoveContainer" containerID="4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.026648 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2cx6p" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.066222 4970 scope.go:117] "RemoveContainer" containerID="d96ffb34060c790698def07c9f1fcff9e01f2fd1634a3476671354b8351027fe" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.073747 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cx6p"] Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.087223 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2cx6p"] Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.093198 4970 scope.go:117] "RemoveContainer" containerID="9cefb68e10428b5a5438d989709eaba0b7e06995d7ea65f257277e28cbea22c2" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.141451 4970 scope.go:117] "RemoveContainer" containerID="4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2" Sep 30 10:18:15 crc kubenswrapper[4970]: E0930 10:18:15.141907 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2\": container with ID starting with 4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2 not found: ID does not exist" containerID="4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.141956 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2"} err="failed to get container status \"4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2\": rpc error: code = NotFound desc = could not find container \"4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2\": container with ID starting with 4ed9aa21506ecb85b7410b86b1708aa9ff1f29228dde9d48f450dec213de6ad2 not found: ID does not exist" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.142006 4970 scope.go:117] "RemoveContainer" containerID="d96ffb34060c790698def07c9f1fcff9e01f2fd1634a3476671354b8351027fe" Sep 30 10:18:15 crc kubenswrapper[4970]: E0930 10:18:15.142343 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d96ffb34060c790698def07c9f1fcff9e01f2fd1634a3476671354b8351027fe\": container with ID starting with d96ffb34060c790698def07c9f1fcff9e01f2fd1634a3476671354b8351027fe not found: ID does not exist" containerID="d96ffb34060c790698def07c9f1fcff9e01f2fd1634a3476671354b8351027fe" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.142387 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d96ffb34060c790698def07c9f1fcff9e01f2fd1634a3476671354b8351027fe"} err="failed to get container status \"d96ffb34060c790698def07c9f1fcff9e01f2fd1634a3476671354b8351027fe\": rpc error: code = NotFound desc = could not find container \"d96ffb34060c790698def07c9f1fcff9e01f2fd1634a3476671354b8351027fe\": container with ID starting with d96ffb34060c790698def07c9f1fcff9e01f2fd1634a3476671354b8351027fe not found: ID does not exist" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.142453 4970 scope.go:117] "RemoveContainer" containerID="9cefb68e10428b5a5438d989709eaba0b7e06995d7ea65f257277e28cbea22c2" Sep 30 10:18:15 crc kubenswrapper[4970]: E0930 10:18:15.142722 4970 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9cefb68e10428b5a5438d989709eaba0b7e06995d7ea65f257277e28cbea22c2\": container with ID starting with 9cefb68e10428b5a5438d989709eaba0b7e06995d7ea65f257277e28cbea22c2 not found: ID does not exist" containerID="9cefb68e10428b5a5438d989709eaba0b7e06995d7ea65f257277e28cbea22c2" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.142754 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cefb68e10428b5a5438d989709eaba0b7e06995d7ea65f257277e28cbea22c2"} err="failed to get container status \"9cefb68e10428b5a5438d989709eaba0b7e06995d7ea65f257277e28cbea22c2\": rpc error: code = NotFound desc = could not find container \"9cefb68e10428b5a5438d989709eaba0b7e06995d7ea65f257277e28cbea22c2\": container with ID starting with 9cefb68e10428b5a5438d989709eaba0b7e06995d7ea65f257277e28cbea22c2 not found: ID does not exist" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.316282 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9pfx"] Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.316536 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l9pfx" podUID="4d1e31e9-3686-4a19-820c-bbed54c8c5a4" containerName="registry-server" containerID="cri-o://c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617" gracePeriod=2 Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.679688 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f54c8400-c95f-4b8d-8a34-bee15e7112ad" path="/var/lib/kubelet/pods/f54c8400-c95f-4b8d-8a34-bee15e7112ad/volumes" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.818767 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.933819 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-catalog-content\") pod \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\" (UID: \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\") " Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.933893 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26889\" (UniqueName: \"kubernetes.io/projected/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-kube-api-access-26889\") pod \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\" (UID: \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\") " Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.934104 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-utilities\") pod \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\" (UID: \"4d1e31e9-3686-4a19-820c-bbed54c8c5a4\") " Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.935235 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-utilities" (OuterVolumeSpecName: "utilities") pod "4d1e31e9-3686-4a19-820c-bbed54c8c5a4" (UID: "4d1e31e9-3686-4a19-820c-bbed54c8c5a4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.940127 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-kube-api-access-26889" (OuterVolumeSpecName: "kube-api-access-26889") pod "4d1e31e9-3686-4a19-820c-bbed54c8c5a4" (UID: "4d1e31e9-3686-4a19-820c-bbed54c8c5a4"). InnerVolumeSpecName "kube-api-access-26889". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:18:15 crc kubenswrapper[4970]: I0930 10:18:15.978898 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4d1e31e9-3686-4a19-820c-bbed54c8c5a4" (UID: "4d1e31e9-3686-4a19-820c-bbed54c8c5a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.036702 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.036749 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26889\" (UniqueName: \"kubernetes.io/projected/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-kube-api-access-26889\") on node \"crc\" DevicePath \"\"" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.036770 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4d1e31e9-3686-4a19-820c-bbed54c8c5a4-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.042183 4970 generic.go:334] "Generic (PLEG): container finished" podID="4d1e31e9-3686-4a19-820c-bbed54c8c5a4" containerID="c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617" exitCode=0 Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.042231 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l9pfx" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.042243 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9pfx" event={"ID":"4d1e31e9-3686-4a19-820c-bbed54c8c5a4","Type":"ContainerDied","Data":"c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617"} Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.042280 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9pfx" event={"ID":"4d1e31e9-3686-4a19-820c-bbed54c8c5a4","Type":"ContainerDied","Data":"d957c4edfe67cda20a4e29cf9397606e28b751e4797abf3651f4c40a843b742e"} Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.042309 4970 scope.go:117] "RemoveContainer" containerID="c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.063347 4970 scope.go:117] "RemoveContainer" containerID="fcd79429c1e3387e47e776e84e7e9cf06c77d8e74208910128f5e757e6fb759e" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.082995 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9pfx"] Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.091058 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l9pfx"] Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.109262 4970 scope.go:117] "RemoveContainer" containerID="127ea5e65288d492324f6d925d64d59daae721933a9380bbff324334a22f94b0" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.124727 4970 scope.go:117] "RemoveContainer" containerID="c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617" Sep 30 10:18:16 crc kubenswrapper[4970]: E0930 10:18:16.125421 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617\": container with ID starting with c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617 not found: ID does not exist" containerID="c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.125484 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617"} err="failed to get container status \"c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617\": rpc error: code = NotFound desc = could not find container \"c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617\": container with ID starting with c6af4482d241b2deb055be185eecacbdf0f02c267e0853b211c7ac1fca7d1617 not found: ID does not exist" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.125518 4970 scope.go:117] "RemoveContainer" containerID="fcd79429c1e3387e47e776e84e7e9cf06c77d8e74208910128f5e757e6fb759e" Sep 30 10:18:16 crc kubenswrapper[4970]: E0930 10:18:16.125979 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd79429c1e3387e47e776e84e7e9cf06c77d8e74208910128f5e757e6fb759e\": container with ID starting with fcd79429c1e3387e47e776e84e7e9cf06c77d8e74208910128f5e757e6fb759e not found: ID does not exist" containerID="fcd79429c1e3387e47e776e84e7e9cf06c77d8e74208910128f5e757e6fb759e" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.126040 4970 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd79429c1e3387e47e776e84e7e9cf06c77d8e74208910128f5e757e6fb759e"} err="failed to get container status \"fcd79429c1e3387e47e776e84e7e9cf06c77d8e74208910128f5e757e6fb759e\": rpc error: code = NotFound desc = could not find container \"fcd79429c1e3387e47e776e84e7e9cf06c77d8e74208910128f5e757e6fb759e\": container with ID starting with fcd79429c1e3387e47e776e84e7e9cf06c77d8e74208910128f5e757e6fb759e not found: ID does not exist" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.126066 4970 scope.go:117] "RemoveContainer" containerID="127ea5e65288d492324f6d925d64d59daae721933a9380bbff324334a22f94b0" Sep 30 10:18:16 crc kubenswrapper[4970]: E0930 10:18:16.126383 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"127ea5e65288d492324f6d925d64d59daae721933a9380bbff324334a22f94b0\": container with ID starting with 127ea5e65288d492324f6d925d64d59daae721933a9380bbff324334a22f94b0 not found: ID does not exist" containerID="127ea5e65288d492324f6d925d64d59daae721933a9380bbff324334a22f94b0" Sep 30 10:18:16 crc kubenswrapper[4970]: I0930 10:18:16.126409 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"127ea5e65288d492324f6d925d64d59daae721933a9380bbff324334a22f94b0"} err="failed to get container status \"127ea5e65288d492324f6d925d64d59daae721933a9380bbff324334a22f94b0\": rpc error: code = NotFound desc = could not find container \"127ea5e65288d492324f6d925d64d59daae721933a9380bbff324334a22f94b0\": container with ID starting with 127ea5e65288d492324f6d925d64d59daae721933a9380bbff324334a22f94b0 not found: ID does not exist" Sep 30 10:18:17 crc kubenswrapper[4970]: I0930 10:18:17.679582 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1e31e9-3686-4a19-820c-bbed54c8c5a4" path="/var/lib/kubelet/pods/4d1e31e9-3686-4a19-820c-bbed54c8c5a4/volumes" Sep 30 10:18:23 crc kubenswrapper[4970]: I0930 10:18:23.669059 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:18:23 crc kubenswrapper[4970]: E0930 10:18:23.670081 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:18:34 crc kubenswrapper[4970]: I0930 10:18:34.668964 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:18:34 crc kubenswrapper[4970]: E0930 10:18:34.669906 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:18:49 crc kubenswrapper[4970]: I0930 10:18:49.669183 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac" Sep 30 10:18:49 crc 
kubenswrapper[4970]: E0930 10:18:49.670098 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:18:58 crc kubenswrapper[4970]: I0930 10:18:58.491728 4970 generic.go:334] "Generic (PLEG): container finished" podID="be128dee-e0c7-4db4-a760-b03c3b8d263d" containerID="1e6b39c7ad8d6daa37847f6745796bcbe440cb11a953121fea1547fd9e699a22" exitCode=0 Sep 30 10:18:58 crc kubenswrapper[4970]: I0930 10:18:58.491822 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" event={"ID":"be128dee-e0c7-4db4-a760-b03c3b8d263d","Type":"ContainerDied","Data":"1e6b39c7ad8d6daa37847f6745796bcbe440cb11a953121fea1547fd9e699a22"} Sep 30 10:18:59 crc kubenswrapper[4970]: I0930 10:18:59.887889 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:18:59 crc kubenswrapper[4970]: I0930 10:18:59.911386 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-ssh-key\") pod \"be128dee-e0c7-4db4-a760-b03c3b8d263d\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " Sep 30 10:18:59 crc kubenswrapper[4970]: I0930 10:18:59.911458 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-neutron-ovn-metadata-agent-neutron-config-0\") pod \"be128dee-e0c7-4db4-a760-b03c3b8d263d\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " Sep 30 10:18:59 crc kubenswrapper[4970]: I0930 10:18:59.911536 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-nova-metadata-neutron-config-0\") pod \"be128dee-e0c7-4db4-a760-b03c3b8d263d\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " Sep 30 10:18:59 crc kubenswrapper[4970]: I0930 10:18:59.911642 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-inventory\") pod \"be128dee-e0c7-4db4-a760-b03c3b8d263d\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " Sep 30 10:18:59 crc kubenswrapper[4970]: I0930 10:18:59.911737 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhvlt\" (UniqueName: \"kubernetes.io/projected/be128dee-e0c7-4db4-a760-b03c3b8d263d-kube-api-access-dhvlt\") pod \"be128dee-e0c7-4db4-a760-b03c3b8d263d\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " Sep 30 10:18:59 crc kubenswrapper[4970]: I0930 10:18:59.911771 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-neutron-metadata-combined-ca-bundle\") pod \"be128dee-e0c7-4db4-a760-b03c3b8d263d\" (UID: \"be128dee-e0c7-4db4-a760-b03c3b8d263d\") " Sep 30 10:18:59 crc 
kubenswrapper[4970]: I0930 10:18:59.924160 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "be128dee-e0c7-4db4-a760-b03c3b8d263d" (UID: "be128dee-e0c7-4db4-a760-b03c3b8d263d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:18:59 crc kubenswrapper[4970]: I0930 10:18:59.924921 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be128dee-e0c7-4db4-a760-b03c3b8d263d-kube-api-access-dhvlt" (OuterVolumeSpecName: "kube-api-access-dhvlt") pod "be128dee-e0c7-4db4-a760-b03c3b8d263d" (UID: "be128dee-e0c7-4db4-a760-b03c3b8d263d"). InnerVolumeSpecName "kube-api-access-dhvlt". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:18:59 crc kubenswrapper[4970]: I0930 10:18:59.942505 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "be128dee-e0c7-4db4-a760-b03c3b8d263d" (UID: "be128dee-e0c7-4db4-a760-b03c3b8d263d"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:18:59 crc kubenswrapper[4970]: I0930 10:18:59.954375 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-inventory" (OuterVolumeSpecName: "inventory") pod "be128dee-e0c7-4db4-a760-b03c3b8d263d" (UID: "be128dee-e0c7-4db4-a760-b03c3b8d263d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:18:59 crc kubenswrapper[4970]: I0930 10:18:59.960922 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "be128dee-e0c7-4db4-a760-b03c3b8d263d" (UID: "be128dee-e0c7-4db4-a760-b03c3b8d263d"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:18:59 crc kubenswrapper[4970]: I0930 10:18:59.974696 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "be128dee-e0c7-4db4-a760-b03c3b8d263d" (UID: "be128dee-e0c7-4db4-a760-b03c3b8d263d"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.014792 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhvlt\" (UniqueName: \"kubernetes.io/projected/be128dee-e0c7-4db4-a760-b03c3b8d263d-kube-api-access-dhvlt\") on node \"crc\" DevicePath \"\"" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.014838 4970 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.014850 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.014859 4970 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.014871 4970 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.014879 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be128dee-e0c7-4db4-a760-b03c3b8d263d-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.514691 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" event={"ID":"be128dee-e0c7-4db4-a760-b03c3b8d263d","Type":"ContainerDied","Data":"1c152d3bb37eb902c963bacd99a1235ec79166f171b3b0aacaffd89d194d421f"} Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.515322 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c152d3bb37eb902c963bacd99a1235ec79166f171b3b0aacaffd89d194d421f" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.514792 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.634182 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"] Sep 30 10:19:00 crc kubenswrapper[4970]: E0930 10:19:00.634604 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1e31e9-3686-4a19-820c-bbed54c8c5a4" containerName="extract-content" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.634629 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1e31e9-3686-4a19-820c-bbed54c8c5a4" containerName="extract-content" Sep 30 10:19:00 crc kubenswrapper[4970]: E0930 10:19:00.634656 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54c8400-c95f-4b8d-8a34-bee15e7112ad" containerName="extract-content" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.634665 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54c8400-c95f-4b8d-8a34-bee15e7112ad" containerName="extract-content" Sep 30 10:19:00 crc kubenswrapper[4970]: E0930 10:19:00.634680 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1e31e9-3686-4a19-820c-bbed54c8c5a4" containerName="registry-server" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.634689 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1e31e9-3686-4a19-820c-bbed54c8c5a4" containerName="registry-server" Sep 30 10:19:00 crc kubenswrapper[4970]: E0930 10:19:00.634715 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d1e31e9-3686-4a19-820c-bbed54c8c5a4" containerName="extract-utilities" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.634724 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d1e31e9-3686-4a19-820c-bbed54c8c5a4" containerName="extract-utilities" Sep 30 10:19:00 crc kubenswrapper[4970]: E0930 10:19:00.634747 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54c8400-c95f-4b8d-8a34-bee15e7112ad" containerName="extract-utilities" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.634756 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54c8400-c95f-4b8d-8a34-bee15e7112ad" containerName="extract-utilities" Sep 30 10:19:00 crc kubenswrapper[4970]: E0930 10:19:00.634773 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be128dee-e0c7-4db4-a760-b03c3b8d263d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.634783 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="be128dee-e0c7-4db4-a760-b03c3b8d263d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Sep 30 10:19:00 crc kubenswrapper[4970]: E0930 10:19:00.634790 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f54c8400-c95f-4b8d-8a34-bee15e7112ad" containerName="registry-server" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.634797 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="f54c8400-c95f-4b8d-8a34-bee15e7112ad" containerName="registry-server" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.635038 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="be128dee-e0c7-4db4-a760-b03c3b8d263d" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.635063 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d1e31e9-3686-4a19-820c-bbed54c8c5a4" 
containerName="registry-server"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.635075 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="f54c8400-c95f-4b8d-8a34-bee15e7112ad" containerName="registry-server"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.635838 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.638194 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.638234 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.639270 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.639746 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.640771 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.648405 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"]
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.656238 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.656370 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.656445 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.656475 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2qd4\" (UniqueName: \"kubernetes.io/projected/109a756f-75b7-4ce1-a45f-3363d2d4097e-kube-api-access-x2qd4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.656498 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.758548 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.758666 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2qd4\" (UniqueName: \"kubernetes.io/projected/109a756f-75b7-4ce1-a45f-3363d2d4097e-kube-api-access-x2qd4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.758705 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.758730 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.758910 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.763661 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.763939 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.764319 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.765242 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.790146 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2qd4\" (UniqueName: \"kubernetes.io/projected/109a756f-75b7-4ce1-a45f-3363d2d4097e-kube-api-access-x2qd4\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-jgcst\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:00 crc kubenswrapper[4970]: I0930 10:19:00.954218 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:19:01 crc kubenswrapper[4970]: I0930 10:19:01.545194 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"]
Sep 30 10:19:01 crc kubenswrapper[4970]: I0930 10:19:01.668663 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac"
Sep 30 10:19:01 crc kubenswrapper[4970]: E0930 10:19:01.668918 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:19:02 crc kubenswrapper[4970]: I0930 10:19:02.535167 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst" event={"ID":"109a756f-75b7-4ce1-a45f-3363d2d4097e","Type":"ContainerStarted","Data":"dde16e2a05f156ce7717dc384ac617da7fc4a2f1f6b132e86eb6e9935fff5944"}
Sep 30 10:19:02 crc kubenswrapper[4970]: I0930 10:19:02.535527 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst" event={"ID":"109a756f-75b7-4ce1-a45f-3363d2d4097e","Type":"ContainerStarted","Data":"26e095224836e8b3d2ae5d93d746cc43530eee04ec22670701ae8a6ccd7d46ae"}
Sep 30 10:19:02 crc kubenswrapper[4970]: I0930 10:19:02.568737 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst" podStartSLOduration=2.077357823 podStartE2EDuration="2.568716434s" podCreationTimestamp="2025-09-30 10:19:00 +0000 UTC" firstStartedPulling="2025-09-30 10:19:01.552622925 +0000 UTC m=+1954.624473859" lastFinishedPulling="2025-09-30 10:19:02.043981526 +0000 UTC m=+1955.115832470" observedRunningTime="2025-09-30 10:19:02.56054437 +0000 UTC m=+1955.632395334" watchObservedRunningTime="2025-09-30 10:19:02.568716434 +0000 UTC m=+1955.640567398"
Sep 30 10:19:13 crc kubenswrapper[4970]: I0930 10:19:13.669218 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac"
Sep 30 10:19:13 crc kubenswrapper[4970]: E0930 10:19:13.670365 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:19:27 crc kubenswrapper[4970]: I0930 10:19:27.678681 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac"
Sep 30 10:19:27 crc kubenswrapper[4970]: E0930 10:19:27.680892 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:19:43 crc kubenswrapper[4970]: I0930 10:19:43.669219 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac"
Sep 30 10:19:43 crc kubenswrapper[4970]: I0930 10:19:43.955096 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"9908c880d37d8bba739f99f7c0b8663d5d7ff5cb7bd21d70d725e2a30dda3ef0"}
Sep 30 10:21:12 crc kubenswrapper[4970]: I0930 10:21:12.675238 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-krnpl"]
Sep 30 10:21:12 crc kubenswrapper[4970]: I0930 10:21:12.679218 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:12 crc kubenswrapper[4970]: I0930 10:21:12.697809 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-krnpl"]
Sep 30 10:21:12 crc kubenswrapper[4970]: I0930 10:21:12.821698 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/751b87b9-4eeb-45bd-9609-b0b165a54a7a-catalog-content\") pod \"community-operators-krnpl\" (UID: \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\") " pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:12 crc kubenswrapper[4970]: I0930 10:21:12.821940 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk2l7\" (UniqueName: \"kubernetes.io/projected/751b87b9-4eeb-45bd-9609-b0b165a54a7a-kube-api-access-jk2l7\") pod \"community-operators-krnpl\" (UID: \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\") " pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:12 crc kubenswrapper[4970]: I0930 10:21:12.821966 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/751b87b9-4eeb-45bd-9609-b0b165a54a7a-utilities\") pod \"community-operators-krnpl\" (UID: \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\") " pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:12 crc kubenswrapper[4970]: I0930 10:21:12.924128 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jk2l7\" (UniqueName: \"kubernetes.io/projected/751b87b9-4eeb-45bd-9609-b0b165a54a7a-kube-api-access-jk2l7\") pod \"community-operators-krnpl\" (UID: \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\") " pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:12 crc kubenswrapper[4970]: I0930 10:21:12.924211 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/751b87b9-4eeb-45bd-9609-b0b165a54a7a-utilities\") pod \"community-operators-krnpl\" (UID: \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\") " pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:12 crc kubenswrapper[4970]: I0930 10:21:12.924325 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/751b87b9-4eeb-45bd-9609-b0b165a54a7a-catalog-content\") pod \"community-operators-krnpl\" (UID: \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\") " pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:12 crc kubenswrapper[4970]: I0930 10:21:12.925173 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/751b87b9-4eeb-45bd-9609-b0b165a54a7a-utilities\") pod \"community-operators-krnpl\" (UID: \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\") " pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:12 crc kubenswrapper[4970]: I0930 10:21:12.925195 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/751b87b9-4eeb-45bd-9609-b0b165a54a7a-catalog-content\") pod \"community-operators-krnpl\" (UID: \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\") " pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:12 crc kubenswrapper[4970]: I0930 10:21:12.942622 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jk2l7\" (UniqueName: \"kubernetes.io/projected/751b87b9-4eeb-45bd-9609-b0b165a54a7a-kube-api-access-jk2l7\") pod \"community-operators-krnpl\" (UID: \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\") " pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:13 crc kubenswrapper[4970]: I0930 10:21:13.002827 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:13 crc kubenswrapper[4970]: I0930 10:21:13.572713 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-krnpl"]
Sep 30 10:21:13 crc kubenswrapper[4970]: I0930 10:21:13.907692 4970 generic.go:334] "Generic (PLEG): container finished" podID="751b87b9-4eeb-45bd-9609-b0b165a54a7a" containerID="54a55b7393620dacaf0d32c3200c9478f35b5695f281072cc1c88456d4f75019" exitCode=0
Sep 30 10:21:13 crc kubenswrapper[4970]: I0930 10:21:13.907743 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krnpl" event={"ID":"751b87b9-4eeb-45bd-9609-b0b165a54a7a","Type":"ContainerDied","Data":"54a55b7393620dacaf0d32c3200c9478f35b5695f281072cc1c88456d4f75019"}
Sep 30 10:21:13 crc kubenswrapper[4970]: I0930 10:21:13.907773 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krnpl" event={"ID":"751b87b9-4eeb-45bd-9609-b0b165a54a7a","Type":"ContainerStarted","Data":"ee9774db93e2098a185a20af619b3d9235a0292f6adf66ca4cf3f53331107da4"}
Sep 30 10:21:15 crc kubenswrapper[4970]: I0930 10:21:15.935065 4970 generic.go:334] "Generic (PLEG): container finished" podID="751b87b9-4eeb-45bd-9609-b0b165a54a7a" containerID="eeecb77c7c8001f98950b2debec39edbaa86c9168767b60a560105efadf927f0" exitCode=0
Sep 30 10:21:15 crc kubenswrapper[4970]: I0930 10:21:15.935185 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krnpl" event={"ID":"751b87b9-4eeb-45bd-9609-b0b165a54a7a","Type":"ContainerDied","Data":"eeecb77c7c8001f98950b2debec39edbaa86c9168767b60a560105efadf927f0"}
Sep 30 10:21:16 crc kubenswrapper[4970]: I0930 10:21:16.950513 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krnpl" event={"ID":"751b87b9-4eeb-45bd-9609-b0b165a54a7a","Type":"ContainerStarted","Data":"95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815"}
Sep 30 10:21:16 crc kubenswrapper[4970]: I0930 10:21:16.985610 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-krnpl" podStartSLOduration=2.473752993 podStartE2EDuration="4.985581991s" podCreationTimestamp="2025-09-30 10:21:12 +0000 UTC" firstStartedPulling="2025-09-30 10:21:13.910725815 +0000 UTC m=+2086.982576779" lastFinishedPulling="2025-09-30 10:21:16.422554803 +0000 UTC m=+2089.494405777" observedRunningTime="2025-09-30 10:21:16.980126151 +0000 UTC m=+2090.051977145" watchObservedRunningTime="2025-09-30 10:21:16.985581991 +0000 UTC m=+2090.057432965"
Sep 30 10:21:23 crc kubenswrapper[4970]: I0930 10:21:23.003622 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:23 crc kubenswrapper[4970]: I0930 10:21:23.004126 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:23 crc kubenswrapper[4970]: I0930 10:21:23.067097 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:24 crc kubenswrapper[4970]: I0930 10:21:24.128400 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:24 crc kubenswrapper[4970]: I0930 10:21:24.191923 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-krnpl"]
Sep 30 10:21:26 crc kubenswrapper[4970]: I0930 10:21:26.049516 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-krnpl" podUID="751b87b9-4eeb-45bd-9609-b0b165a54a7a" containerName="registry-server" containerID="cri-o://95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815" gracePeriod=2
Sep 30 10:21:26 crc kubenswrapper[4970]: I0930 10:21:26.546828 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:26 crc kubenswrapper[4970]: I0930 10:21:26.708192 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/751b87b9-4eeb-45bd-9609-b0b165a54a7a-catalog-content\") pod \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\" (UID: \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\") "
Sep 30 10:21:26 crc kubenswrapper[4970]: I0930 10:21:26.708321 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jk2l7\" (UniqueName: \"kubernetes.io/projected/751b87b9-4eeb-45bd-9609-b0b165a54a7a-kube-api-access-jk2l7\") pod \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\" (UID: \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\") "
Sep 30 10:21:26 crc kubenswrapper[4970]: I0930 10:21:26.708415 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/751b87b9-4eeb-45bd-9609-b0b165a54a7a-utilities\") pod \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\" (UID: \"751b87b9-4eeb-45bd-9609-b0b165a54a7a\") "
Sep 30 10:21:26 crc kubenswrapper[4970]: I0930 10:21:26.710466 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/751b87b9-4eeb-45bd-9609-b0b165a54a7a-utilities" (OuterVolumeSpecName: "utilities") pod "751b87b9-4eeb-45bd-9609-b0b165a54a7a" (UID: "751b87b9-4eeb-45bd-9609-b0b165a54a7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:21:26 crc kubenswrapper[4970]: I0930 10:21:26.714363 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751b87b9-4eeb-45bd-9609-b0b165a54a7a-kube-api-access-jk2l7" (OuterVolumeSpecName: "kube-api-access-jk2l7") pod "751b87b9-4eeb-45bd-9609-b0b165a54a7a" (UID: "751b87b9-4eeb-45bd-9609-b0b165a54a7a"). InnerVolumeSpecName "kube-api-access-jk2l7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:21:26 crc kubenswrapper[4970]: I0930 10:21:26.759256 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/751b87b9-4eeb-45bd-9609-b0b165a54a7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "751b87b9-4eeb-45bd-9609-b0b165a54a7a" (UID: "751b87b9-4eeb-45bd-9609-b0b165a54a7a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:21:26 crc kubenswrapper[4970]: I0930 10:21:26.810200 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/751b87b9-4eeb-45bd-9609-b0b165a54a7a-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 10:21:26 crc kubenswrapper[4970]: I0930 10:21:26.810252 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/751b87b9-4eeb-45bd-9609-b0b165a54a7a-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 30 10:21:26 crc kubenswrapper[4970]: I0930 10:21:26.810266 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jk2l7\" (UniqueName: \"kubernetes.io/projected/751b87b9-4eeb-45bd-9609-b0b165a54a7a-kube-api-access-jk2l7\") on node \"crc\" DevicePath \"\""
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.067877 4970 generic.go:334] "Generic (PLEG): container finished" podID="751b87b9-4eeb-45bd-9609-b0b165a54a7a" containerID="95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815" exitCode=0
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.067955 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krnpl" event={"ID":"751b87b9-4eeb-45bd-9609-b0b165a54a7a","Type":"ContainerDied","Data":"95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815"}
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.068066 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-krnpl" event={"ID":"751b87b9-4eeb-45bd-9609-b0b165a54a7a","Type":"ContainerDied","Data":"ee9774db93e2098a185a20af619b3d9235a0292f6adf66ca4cf3f53331107da4"}
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.068110 4970 scope.go:117] "RemoveContainer" containerID="95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815"
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.068114 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-krnpl"
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.113212 4970 scope.go:117] "RemoveContainer" containerID="eeecb77c7c8001f98950b2debec39edbaa86c9168767b60a560105efadf927f0"
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.124220 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-krnpl"]
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.135121 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-krnpl"]
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.147720 4970 scope.go:117] "RemoveContainer" containerID="54a55b7393620dacaf0d32c3200c9478f35b5695f281072cc1c88456d4f75019"
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.182960 4970 scope.go:117] "RemoveContainer" containerID="95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815"
Sep 30 10:21:27 crc kubenswrapper[4970]: E0930 10:21:27.183574 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815\": container with ID starting with 95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815 not found: ID does not exist" containerID="95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815"
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.183638 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815"} err="failed to get container status \"95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815\": rpc error: code = NotFound desc = could not find container \"95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815\": container with ID starting with 95c1bf83f504fd3bf14ae327cd092ff61f98a9fe91eae24c9336562a0f685815 not found: ID does not exist"
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.183680 4970 scope.go:117] "RemoveContainer" containerID="eeecb77c7c8001f98950b2debec39edbaa86c9168767b60a560105efadf927f0"
Sep 30 10:21:27 crc kubenswrapper[4970]: E0930 10:21:27.184148 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeecb77c7c8001f98950b2debec39edbaa86c9168767b60a560105efadf927f0\": container with ID starting with eeecb77c7c8001f98950b2debec39edbaa86c9168767b60a560105efadf927f0 not found: ID does not exist" containerID="eeecb77c7c8001f98950b2debec39edbaa86c9168767b60a560105efadf927f0"
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.184199 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeecb77c7c8001f98950b2debec39edbaa86c9168767b60a560105efadf927f0"} err="failed to get container status \"eeecb77c7c8001f98950b2debec39edbaa86c9168767b60a560105efadf927f0\": rpc error: code = NotFound desc = could not find container \"eeecb77c7c8001f98950b2debec39edbaa86c9168767b60a560105efadf927f0\": container with ID starting with eeecb77c7c8001f98950b2debec39edbaa86c9168767b60a560105efadf927f0 not found: ID does not exist"
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.184227 4970 scope.go:117] "RemoveContainer" containerID="54a55b7393620dacaf0d32c3200c9478f35b5695f281072cc1c88456d4f75019"
Sep 30 10:21:27 crc kubenswrapper[4970]: E0930 10:21:27.184543 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54a55b7393620dacaf0d32c3200c9478f35b5695f281072cc1c88456d4f75019\": container with ID starting with 54a55b7393620dacaf0d32c3200c9478f35b5695f281072cc1c88456d4f75019 not found: ID does not exist" containerID="54a55b7393620dacaf0d32c3200c9478f35b5695f281072cc1c88456d4f75019"
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.184611 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54a55b7393620dacaf0d32c3200c9478f35b5695f281072cc1c88456d4f75019"} err="failed to get container status \"54a55b7393620dacaf0d32c3200c9478f35b5695f281072cc1c88456d4f75019\": rpc error: code = NotFound desc = could not find container \"54a55b7393620dacaf0d32c3200c9478f35b5695f281072cc1c88456d4f75019\": container with ID starting with 54a55b7393620dacaf0d32c3200c9478f35b5695f281072cc1c88456d4f75019 not found: ID does not exist"
Sep 30 10:21:27 crc kubenswrapper[4970]: I0930 10:21:27.687503 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="751b87b9-4eeb-45bd-9609-b0b165a54a7a" path="/var/lib/kubelet/pods/751b87b9-4eeb-45bd-9609-b0b165a54a7a/volumes"
Sep 30 10:21:53 crc kubenswrapper[4970]: I0930 10:21:53.844312 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x6h4d"]
Sep 30 10:21:53 crc kubenswrapper[4970]: E0930 10:21:53.848186 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="751b87b9-4eeb-45bd-9609-b0b165a54a7a" containerName="extract-content"
Sep 30 10:21:53 crc kubenswrapper[4970]: I0930 10:21:53.848313 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="751b87b9-4eeb-45bd-9609-b0b165a54a7a" containerName="extract-content"
Sep 30 10:21:53 crc kubenswrapper[4970]: E0930 10:21:53.848406 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="751b87b9-4eeb-45bd-9609-b0b165a54a7a" containerName="extract-utilities"
Sep 30 10:21:53 crc kubenswrapper[4970]: I0930 10:21:53.848485 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="751b87b9-4eeb-45bd-9609-b0b165a54a7a" containerName="extract-utilities"
Sep 30 10:21:53 crc kubenswrapper[4970]: E0930 10:21:53.848592 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="751b87b9-4eeb-45bd-9609-b0b165a54a7a" containerName="registry-server"
Sep 30 10:21:53 crc kubenswrapper[4970]: I0930 10:21:53.848674 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="751b87b9-4eeb-45bd-9609-b0b165a54a7a" containerName="registry-server"
Sep 30 10:21:53 crc kubenswrapper[4970]: I0930 10:21:53.849044 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="751b87b9-4eeb-45bd-9609-b0b165a54a7a" containerName="registry-server"
Sep 30 10:21:53 crc kubenswrapper[4970]: I0930 10:21:53.850971 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:21:53 crc kubenswrapper[4970]: I0930 10:21:53.856075 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x6h4d"]
Sep 30 10:21:53 crc kubenswrapper[4970]: I0930 10:21:53.986950 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl226\" (UniqueName: \"kubernetes.io/projected/8b58be07-110a-46ca-8115-ec631d899e36-kube-api-access-jl226\") pod \"redhat-operators-x6h4d\" (UID: \"8b58be07-110a-46ca-8115-ec631d899e36\") " pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:21:53 crc kubenswrapper[4970]: I0930 10:21:53.987046 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b58be07-110a-46ca-8115-ec631d899e36-catalog-content\") pod \"redhat-operators-x6h4d\" (UID: \"8b58be07-110a-46ca-8115-ec631d899e36\") " pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:21:53 crc kubenswrapper[4970]: I0930 10:21:53.987325 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b58be07-110a-46ca-8115-ec631d899e36-utilities\") pod \"redhat-operators-x6h4d\" (UID: \"8b58be07-110a-46ca-8115-ec631d899e36\") " pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:21:54 crc kubenswrapper[4970]: I0930 10:21:54.089094 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b58be07-110a-46ca-8115-ec631d899e36-utilities\") pod \"redhat-operators-x6h4d\" (UID: \"8b58be07-110a-46ca-8115-ec631d899e36\") " pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:21:54 crc kubenswrapper[4970]: I0930 10:21:54.089220 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl226\" (UniqueName: \"kubernetes.io/projected/8b58be07-110a-46ca-8115-ec631d899e36-kube-api-access-jl226\") pod \"redhat-operators-x6h4d\" (UID: \"8b58be07-110a-46ca-8115-ec631d899e36\") " pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:21:54 crc kubenswrapper[4970]: I0930 10:21:54.089299 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b58be07-110a-46ca-8115-ec631d899e36-catalog-content\") pod \"redhat-operators-x6h4d\" (UID: \"8b58be07-110a-46ca-8115-ec631d899e36\") " pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:21:54 crc kubenswrapper[4970]: I0930 10:21:54.089889 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b58be07-110a-46ca-8115-ec631d899e36-catalog-content\") pod \"redhat-operators-x6h4d\" (UID: \"8b58be07-110a-46ca-8115-ec631d899e36\") " pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:21:54 crc kubenswrapper[4970]: I0930 10:21:54.090249 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b58be07-110a-46ca-8115-ec631d899e36-utilities\") pod \"redhat-operators-x6h4d\" (UID: \"8b58be07-110a-46ca-8115-ec631d899e36\") " pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:21:54 crc kubenswrapper[4970]: I0930 10:21:54.114040 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl226\" (UniqueName: \"kubernetes.io/projected/8b58be07-110a-46ca-8115-ec631d899e36-kube-api-access-jl226\") pod \"redhat-operators-x6h4d\" (UID: \"8b58be07-110a-46ca-8115-ec631d899e36\") " pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:21:54 crc kubenswrapper[4970]: I0930 10:21:54.221574 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:21:54 crc kubenswrapper[4970]: I0930 10:21:54.689793 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x6h4d"]
Sep 30 10:21:55 crc kubenswrapper[4970]: I0930 10:21:55.426924 4970 generic.go:334] "Generic (PLEG): container finished" podID="8b58be07-110a-46ca-8115-ec631d899e36" containerID="b3850494802d7d6a1872049ce9df652fd1fd1e6bb1df1eaefaae4df2233654fd" exitCode=0
Sep 30 10:21:55 crc kubenswrapper[4970]: I0930 10:21:55.426999 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6h4d" event={"ID":"8b58be07-110a-46ca-8115-ec631d899e36","Type":"ContainerDied","Data":"b3850494802d7d6a1872049ce9df652fd1fd1e6bb1df1eaefaae4df2233654fd"}
Sep 30 10:21:55 crc kubenswrapper[4970]: I0930 10:21:55.427265 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6h4d" event={"ID":"8b58be07-110a-46ca-8115-ec631d899e36","Type":"ContainerStarted","Data":"9e45797636f6a64f71dc3961f36ce1cdd9b0db9b78bc4ab41a335b1970352221"}
Sep 30 10:21:56 crc kubenswrapper[4970]: I0930 10:21:56.442454 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6h4d" event={"ID":"8b58be07-110a-46ca-8115-ec631d899e36","Type":"ContainerStarted","Data":"3f568c3a76f69edc734bae8ff553e71cfa97e3063ce1cd9ce037734d73702b89"}
Sep 30 10:21:58 crc kubenswrapper[4970]: I0930 10:21:58.465891 4970 generic.go:334] "Generic (PLEG): container finished" podID="8b58be07-110a-46ca-8115-ec631d899e36" containerID="3f568c3a76f69edc734bae8ff553e71cfa97e3063ce1cd9ce037734d73702b89" exitCode=0
Sep 30 10:21:58 crc kubenswrapper[4970]: I0930 10:21:58.466004 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6h4d" event={"ID":"8b58be07-110a-46ca-8115-ec631d899e36","Type":"ContainerDied","Data":"3f568c3a76f69edc734bae8ff553e71cfa97e3063ce1cd9ce037734d73702b89"}
Sep 30 10:21:59 crc kubenswrapper[4970]: I0930 10:21:59.478834 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6h4d" event={"ID":"8b58be07-110a-46ca-8115-ec631d899e36","Type":"ContainerStarted","Data":"3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f"}
Sep 30 10:21:59 crc kubenswrapper[4970]: I0930 10:21:59.513180 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x6h4d" podStartSLOduration=3.017169177 podStartE2EDuration="6.513161658s" podCreationTimestamp="2025-09-30 10:21:53 +0000 UTC" firstStartedPulling="2025-09-30 10:21:55.429344818 +0000 UTC m=+2128.501195772" lastFinishedPulling="2025-09-30 10:21:58.925337309 +0000 UTC m=+2131.997188253" observedRunningTime="2025-09-30 10:21:59.501531579 +0000 UTC m=+2132.573382513" watchObservedRunningTime="2025-09-30 10:21:59.513161658 +0000 UTC m=+2132.585012592"
Sep 30 10:22:04 crc kubenswrapper[4970]: I0930 10:22:04.222512 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:22:04 crc kubenswrapper[4970]: I0930 10:22:04.224336 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:22:04 crc kubenswrapper[4970]: I0930 10:22:04.821317 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 10:22:04 crc kubenswrapper[4970]: I0930 10:22:04.821385 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 10:22:05 crc kubenswrapper[4970]: I0930 10:22:05.274887 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x6h4d" podUID="8b58be07-110a-46ca-8115-ec631d899e36" containerName="registry-server" probeResult="failure" output=<
Sep 30 10:22:05 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s
Sep 30 10:22:05 crc kubenswrapper[4970]: >
Sep 30 10:22:14 crc kubenswrapper[4970]: I0930 10:22:14.272534 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:22:14 crc kubenswrapper[4970]: I0930 10:22:14.338522 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:22:14 crc kubenswrapper[4970]: I0930 10:22:14.512729 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x6h4d"]
Sep 30 10:22:15 crc kubenswrapper[4970]: I0930 10:22:15.648033 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x6h4d" podUID="8b58be07-110a-46ca-8115-ec631d899e36" containerName="registry-server" containerID="cri-o://3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f" gracePeriod=2
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.118475 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.278614 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b58be07-110a-46ca-8115-ec631d899e36-catalog-content\") pod \"8b58be07-110a-46ca-8115-ec631d899e36\" (UID: \"8b58be07-110a-46ca-8115-ec631d899e36\") "
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.278683 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b58be07-110a-46ca-8115-ec631d899e36-utilities\") pod \"8b58be07-110a-46ca-8115-ec631d899e36\" (UID: \"8b58be07-110a-46ca-8115-ec631d899e36\") "
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.278749 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl226\" (UniqueName: \"kubernetes.io/projected/8b58be07-110a-46ca-8115-ec631d899e36-kube-api-access-jl226\") pod \"8b58be07-110a-46ca-8115-ec631d899e36\" (UID: \"8b58be07-110a-46ca-8115-ec631d899e36\") "
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.279474 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b58be07-110a-46ca-8115-ec631d899e36-utilities" (OuterVolumeSpecName: "utilities") pod "8b58be07-110a-46ca-8115-ec631d899e36" (UID: "8b58be07-110a-46ca-8115-ec631d899e36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.283324 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b58be07-110a-46ca-8115-ec631d899e36-kube-api-access-jl226" (OuterVolumeSpecName: "kube-api-access-jl226") pod "8b58be07-110a-46ca-8115-ec631d899e36" (UID: "8b58be07-110a-46ca-8115-ec631d899e36"). InnerVolumeSpecName "kube-api-access-jl226". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.362068 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b58be07-110a-46ca-8115-ec631d899e36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8b58be07-110a-46ca-8115-ec631d899e36" (UID: "8b58be07-110a-46ca-8115-ec631d899e36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.381253 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8b58be07-110a-46ca-8115-ec631d899e36-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.381305 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8b58be07-110a-46ca-8115-ec631d899e36-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.381318 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl226\" (UniqueName: \"kubernetes.io/projected/8b58be07-110a-46ca-8115-ec631d899e36-kube-api-access-jl226\") on node \"crc\" DevicePath \"\""
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.660800 4970 generic.go:334] "Generic (PLEG): container finished" podID="8b58be07-110a-46ca-8115-ec631d899e36" containerID="3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f" exitCode=0
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.660844 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6h4d" event={"ID":"8b58be07-110a-46ca-8115-ec631d899e36","Type":"ContainerDied","Data":"3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f"}
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.660872 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x6h4d" event={"ID":"8b58be07-110a-46ca-8115-ec631d899e36","Type":"ContainerDied","Data":"9e45797636f6a64f71dc3961f36ce1cdd9b0db9b78bc4ab41a335b1970352221"}
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.660890 4970 scope.go:117] "RemoveContainer" containerID="3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f"
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.660910 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x6h4d"
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.684381 4970 scope.go:117] "RemoveContainer" containerID="3f568c3a76f69edc734bae8ff553e71cfa97e3063ce1cd9ce037734d73702b89"
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.707593 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x6h4d"]
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.721056 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x6h4d"]
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.725779 4970 scope.go:117] "RemoveContainer" containerID="b3850494802d7d6a1872049ce9df652fd1fd1e6bb1df1eaefaae4df2233654fd"
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.770311 4970 scope.go:117] "RemoveContainer" containerID="3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f"
Sep 30 10:22:16 crc kubenswrapper[4970]: E0930 10:22:16.770668 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f\": container with ID starting with 3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f not found: ID does not exist" containerID="3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f"
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.770706 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f"} err="failed to get container status \"3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f\": rpc error: code = NotFound desc = could not find container \"3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f\": container with ID starting with 3ae0a802ef548be1543d6ba103a43ba67371579f81b75269a66a632ea5193a7f not found: ID does not exist"
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.770733 4970 scope.go:117] "RemoveContainer" containerID="3f568c3a76f69edc734bae8ff553e71cfa97e3063ce1cd9ce037734d73702b89"
Sep 30 10:22:16 crc kubenswrapper[4970]: E0930 10:22:16.770950 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f568c3a76f69edc734bae8ff553e71cfa97e3063ce1cd9ce037734d73702b89\": container with ID starting with 3f568c3a76f69edc734bae8ff553e71cfa97e3063ce1cd9ce037734d73702b89 not found: ID does not exist" containerID="3f568c3a76f69edc734bae8ff553e71cfa97e3063ce1cd9ce037734d73702b89"
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.771010 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f568c3a76f69edc734bae8ff553e71cfa97e3063ce1cd9ce037734d73702b89"} err="failed to get container status \"3f568c3a76f69edc734bae8ff553e71cfa97e3063ce1cd9ce037734d73702b89\": rpc error: code = NotFound desc = could not find container \"3f568c3a76f69edc734bae8ff553e71cfa97e3063ce1cd9ce037734d73702b89\": container with ID starting with 3f568c3a76f69edc734bae8ff553e71cfa97e3063ce1cd9ce037734d73702b89 not found: ID does not exist"
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.771029 4970 scope.go:117] "RemoveContainer" containerID="b3850494802d7d6a1872049ce9df652fd1fd1e6bb1df1eaefaae4df2233654fd"
Sep 30 10:22:16 crc kubenswrapper[4970]: E0930 10:22:16.771217 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3850494802d7d6a1872049ce9df652fd1fd1e6bb1df1eaefaae4df2233654fd\": container with ID starting with b3850494802d7d6a1872049ce9df652fd1fd1e6bb1df1eaefaae4df2233654fd not found: ID does not exist" containerID="b3850494802d7d6a1872049ce9df652fd1fd1e6bb1df1eaefaae4df2233654fd"
Sep 30 10:22:16 crc kubenswrapper[4970]: I0930 10:22:16.771242 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3850494802d7d6a1872049ce9df652fd1fd1e6bb1df1eaefaae4df2233654fd"} err="failed to get container status \"b3850494802d7d6a1872049ce9df652fd1fd1e6bb1df1eaefaae4df2233654fd\": rpc error: code = NotFound desc = could not find container \"b3850494802d7d6a1872049ce9df652fd1fd1e6bb1df1eaefaae4df2233654fd\": container with ID starting with b3850494802d7d6a1872049ce9df652fd1fd1e6bb1df1eaefaae4df2233654fd not found: ID does not exist"
Sep 30 10:22:17 crc kubenswrapper[4970]: I0930 10:22:17.688589 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b58be07-110a-46ca-8115-ec631d899e36" path="/var/lib/kubelet/pods/8b58be07-110a-46ca-8115-ec631d899e36/volumes"
Sep 30 10:22:34 crc kubenswrapper[4970]: I0930 10:22:34.821800 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 10:22:34 crc kubenswrapper[4970]: I0930 10:22:34.822563 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 10:23:04 crc kubenswrapper[4970]: I0930 10:23:04.821333 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Sep 30 10:23:04 crc kubenswrapper[4970]: I0930 10:23:04.823083 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Sep 30 10:23:04 crc kubenswrapper[4970]: I0930 10:23:04.823209 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg"
Sep 30 10:23:04 crc kubenswrapper[4970]: I0930 10:23:04.825051 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9908c880d37d8bba739f99f7c0b8663d5d7ff5cb7bd21d70d725e2a30dda3ef0"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Sep 30 10:23:04 crc kubenswrapper[4970]: I0930 10:23:04.825190 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://9908c880d37d8bba739f99f7c0b8663d5d7ff5cb7bd21d70d725e2a30dda3ef0" gracePeriod=600
Sep 30 10:23:05 crc kubenswrapper[4970]: I0930 10:23:05.158457 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="9908c880d37d8bba739f99f7c0b8663d5d7ff5cb7bd21d70d725e2a30dda3ef0" exitCode=0
Sep 30 10:23:05 crc kubenswrapper[4970]: I0930 10:23:05.158488 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"9908c880d37d8bba739f99f7c0b8663d5d7ff5cb7bd21d70d725e2a30dda3ef0"}
Sep 30 10:23:05 crc kubenswrapper[4970]: I0930 10:23:05.158921 4970 scope.go:117] "RemoveContainer" containerID="446397ff6ddc65a9766c07123a31afecbc0bda27250cb0777d8513b2c06f33ac"
Sep 30 10:23:06 crc kubenswrapper[4970]: I0930 10:23:06.180676 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf"}
Sep 30 10:23:10 crc kubenswrapper[4970]: I0930 10:23:10.232303 4970 generic.go:334] "Generic (PLEG): container finished" podID="109a756f-75b7-4ce1-a45f-3363d2d4097e" containerID="dde16e2a05f156ce7717dc384ac617da7fc4a2f1f6b132e86eb6e9935fff5944" exitCode=0
Sep 30 10:23:10 crc kubenswrapper[4970]: I0930 10:23:10.232373 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst" event={"ID":"109a756f-75b7-4ce1-a45f-3363d2d4097e","Type":"ContainerDied","Data":"dde16e2a05f156ce7717dc384ac617da7fc4a2f1f6b132e86eb6e9935fff5944"}
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.685374 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.862760 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-ssh-key\") pod \"109a756f-75b7-4ce1-a45f-3363d2d4097e\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") "
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.862818 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-inventory\") pod \"109a756f-75b7-4ce1-a45f-3363d2d4097e\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") "
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.862920 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-libvirt-combined-ca-bundle\") pod \"109a756f-75b7-4ce1-a45f-3363d2d4097e\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") "
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.863064 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2qd4\" (UniqueName: \"kubernetes.io/projected/109a756f-75b7-4ce1-a45f-3363d2d4097e-kube-api-access-x2qd4\") pod \"109a756f-75b7-4ce1-a45f-3363d2d4097e\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") "
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.863150 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-libvirt-secret-0\") pod \"109a756f-75b7-4ce1-a45f-3363d2d4097e\" (UID: \"109a756f-75b7-4ce1-a45f-3363d2d4097e\") "
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.877788 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "109a756f-75b7-4ce1-a45f-3363d2d4097e" (UID: "109a756f-75b7-4ce1-a45f-3363d2d4097e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.877940 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/109a756f-75b7-4ce1-a45f-3363d2d4097e-kube-api-access-x2qd4" (OuterVolumeSpecName: "kube-api-access-x2qd4") pod "109a756f-75b7-4ce1-a45f-3363d2d4097e" (UID: "109a756f-75b7-4ce1-a45f-3363d2d4097e"). InnerVolumeSpecName "kube-api-access-x2qd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.907410 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "109a756f-75b7-4ce1-a45f-3363d2d4097e" (UID: "109a756f-75b7-4ce1-a45f-3363d2d4097e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.909962 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "109a756f-75b7-4ce1-a45f-3363d2d4097e" (UID: "109a756f-75b7-4ce1-a45f-3363d2d4097e"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.914225 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-inventory" (OuterVolumeSpecName: "inventory") pod "109a756f-75b7-4ce1-a45f-3363d2d4097e" (UID: "109a756f-75b7-4ce1-a45f-3363d2d4097e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.967388 4970 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.967427 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-ssh-key\") on node \"crc\" DevicePath \"\""
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.967440 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-inventory\") on node \"crc\" DevicePath \"\""
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.967457 4970 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109a756f-75b7-4ce1-a45f-3363d2d4097e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Sep 30 10:23:11 crc kubenswrapper[4970]: I0930 10:23:11.967472 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2qd4\" (UniqueName: \"kubernetes.io/projected/109a756f-75b7-4ce1-a45f-3363d2d4097e-kube-api-access-x2qd4\") on node \"crc\" DevicePath \"\""
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.255230 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst" event={"ID":"109a756f-75b7-4ce1-a45f-3363d2d4097e","Type":"ContainerDied","Data":"26e095224836e8b3d2ae5d93d746cc43530eee04ec22670701ae8a6ccd7d46ae"}
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.255582 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26e095224836e8b3d2ae5d93d746cc43530eee04ec22670701ae8a6ccd7d46ae"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.255309 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-jgcst"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.393364 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"]
Sep 30 10:23:12 crc kubenswrapper[4970]: E0930 10:23:12.393872 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b58be07-110a-46ca-8115-ec631d899e36" containerName="extract-content"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.393907 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b58be07-110a-46ca-8115-ec631d899e36" containerName="extract-content"
Sep 30 10:23:12 crc kubenswrapper[4970]: E0930 10:23:12.393951 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109a756f-75b7-4ce1-a45f-3363d2d4097e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.393965 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="109a756f-75b7-4ce1-a45f-3363d2d4097e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Sep 30 10:23:12 crc kubenswrapper[4970]: E0930 10:23:12.394147 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b58be07-110a-46ca-8115-ec631d899e36" containerName="registry-server"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.394233 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b58be07-110a-46ca-8115-ec631d899e36" containerName="registry-server"
Sep 30 10:23:12 crc kubenswrapper[4970]: E0930 10:23:12.394278 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b58be07-110a-46ca-8115-ec631d899e36" containerName="extract-utilities"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.394290 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b58be07-110a-46ca-8115-ec631d899e36" containerName="extract-utilities"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.394640 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="109a756f-75b7-4ce1-a45f-3363d2d4097e" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.394676 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b58be07-110a-46ca-8115-ec631d899e36" containerName="registry-server"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.395677 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.398392 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.398732 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.398783 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.398984 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.399233 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.399273 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.399562 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.408270 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"]
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.581317 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.581577 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.581634 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.581764 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zxhl\" (UniqueName: \"kubernetes.io/projected/4f461d08-f275-49fd-be5d-3f4198d81343-kube-api-access-8zxhl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.581819 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.581845 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.581893 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.582142 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4f461d08-f275-49fd-be5d-3f4198d81343-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.582421 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.683764 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zxhl\" (UniqueName: \"kubernetes.io/projected/4f461d08-f275-49fd-be5d-3f4198d81343-kube-api-access-8zxhl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.684094 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.684114 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.684144 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.684254 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4f461d08-f275-49fd-be5d-3f4198d81343-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.685007 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.685076 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.685146 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.685174 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.685274 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4f461d08-f275-49fd-be5d-3f4198d81343-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.689677 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"
Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.689941 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-ssh-key\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.690115 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.691601 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.692035 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.692535 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.708396 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.711531 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zxhl\" (UniqueName: \"kubernetes.io/projected/4f461d08-f275-49fd-be5d-3f4198d81343-kube-api-access-8zxhl\") pod \"nova-edpm-deployment-openstack-edpm-ipam-428p6\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" Sep 30 10:23:12 crc kubenswrapper[4970]: I0930 10:23:12.716292 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" Sep 30 10:23:13 crc kubenswrapper[4970]: I0930 10:23:13.250477 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6"] Sep 30 10:23:13 crc kubenswrapper[4970]: I0930 10:23:13.255657 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 10:23:13 crc kubenswrapper[4970]: I0930 10:23:13.272668 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" event={"ID":"4f461d08-f275-49fd-be5d-3f4198d81343","Type":"ContainerStarted","Data":"ea44ca5cc0a5720e61487b40b9b5ad0d77f5d54c1046c717eb54d911d8c72ee3"} Sep 30 10:23:14 crc kubenswrapper[4970]: I0930 10:23:14.282408 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" event={"ID":"4f461d08-f275-49fd-be5d-3f4198d81343","Type":"ContainerStarted","Data":"8216437784a51b5ff3b37f97a287f3cbae84df120d1d30e2780f56b2a54a4ddc"} Sep 30 10:23:14 crc kubenswrapper[4970]: I0930 10:23:14.311035 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" podStartSLOduration=1.61991152 podStartE2EDuration="2.311016647s" podCreationTimestamp="2025-09-30 10:23:12 +0000 UTC" firstStartedPulling="2025-09-30 10:23:13.255308434 +0000 UTC m=+2206.327159408" lastFinishedPulling="2025-09-30 10:23:13.946413581 +0000 UTC m=+2207.018264535" observedRunningTime="2025-09-30 10:23:14.303448029 +0000 UTC m=+2207.375298993" watchObservedRunningTime="2025-09-30 10:23:14.311016647 +0000 UTC m=+2207.382867581" Sep 30 10:25:34 crc kubenswrapper[4970]: I0930 10:25:34.821127 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:25:34 crc kubenswrapper[4970]: I0930 10:25:34.821661 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:26:04 crc kubenswrapper[4970]: I0930 10:26:04.821187 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:26:04 crc kubenswrapper[4970]: I0930 10:26:04.821653 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:26:21 crc kubenswrapper[4970]: I0930 10:26:21.077347 4970 generic.go:334] "Generic (PLEG): container finished" podID="4f461d08-f275-49fd-be5d-3f4198d81343" containerID="8216437784a51b5ff3b37f97a287f3cbae84df120d1d30e2780f56b2a54a4ddc" exitCode=0 Sep 30 10:26:21 crc kubenswrapper[4970]: I0930 
10:26:21.077434 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" event={"ID":"4f461d08-f275-49fd-be5d-3f4198d81343","Type":"ContainerDied","Data":"8216437784a51b5ff3b37f97a287f3cbae84df120d1d30e2780f56b2a54a4ddc"} Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.544789 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.693104 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-cell1-compute-config-1\") pod \"4f461d08-f275-49fd-be5d-3f4198d81343\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.693204 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-cell1-compute-config-0\") pod \"4f461d08-f275-49fd-be5d-3f4198d81343\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.693236 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-migration-ssh-key-0\") pod \"4f461d08-f275-49fd-be5d-3f4198d81343\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.693266 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-inventory\") pod \"4f461d08-f275-49fd-be5d-3f4198d81343\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.693350 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zxhl\" (UniqueName: \"kubernetes.io/projected/4f461d08-f275-49fd-be5d-3f4198d81343-kube-api-access-8zxhl\") pod \"4f461d08-f275-49fd-be5d-3f4198d81343\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.693381 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-combined-ca-bundle\") pod \"4f461d08-f275-49fd-be5d-3f4198d81343\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.693421 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4f461d08-f275-49fd-be5d-3f4198d81343-nova-extra-config-0\") pod \"4f461d08-f275-49fd-be5d-3f4198d81343\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.693513 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-migration-ssh-key-1\") pod \"4f461d08-f275-49fd-be5d-3f4198d81343\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.693598 4970 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-ssh-key\") pod \"4f461d08-f275-49fd-be5d-3f4198d81343\" (UID: \"4f461d08-f275-49fd-be5d-3f4198d81343\") " Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.701500 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f461d08-f275-49fd-be5d-3f4198d81343-kube-api-access-8zxhl" (OuterVolumeSpecName: "kube-api-access-8zxhl") pod "4f461d08-f275-49fd-be5d-3f4198d81343" (UID: "4f461d08-f275-49fd-be5d-3f4198d81343"). InnerVolumeSpecName "kube-api-access-8zxhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.701503 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "4f461d08-f275-49fd-be5d-3f4198d81343" (UID: "4f461d08-f275-49fd-be5d-3f4198d81343"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.722433 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f461d08-f275-49fd-be5d-3f4198d81343-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "4f461d08-f275-49fd-be5d-3f4198d81343" (UID: "4f461d08-f275-49fd-be5d-3f4198d81343"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.730575 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4f461d08-f275-49fd-be5d-3f4198d81343" (UID: "4f461d08-f275-49fd-be5d-3f4198d81343"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.730684 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-inventory" (OuterVolumeSpecName: "inventory") pod "4f461d08-f275-49fd-be5d-3f4198d81343" (UID: "4f461d08-f275-49fd-be5d-3f4198d81343"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.733350 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "4f461d08-f275-49fd-be5d-3f4198d81343" (UID: "4f461d08-f275-49fd-be5d-3f4198d81343"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.733912 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "4f461d08-f275-49fd-be5d-3f4198d81343" (UID: "4f461d08-f275-49fd-be5d-3f4198d81343"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.734456 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "4f461d08-f275-49fd-be5d-3f4198d81343" (UID: "4f461d08-f275-49fd-be5d-3f4198d81343"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.734836 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "4f461d08-f275-49fd-be5d-3f4198d81343" (UID: "4f461d08-f275-49fd-be5d-3f4198d81343"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.795637 4970 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.796720 4970 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.796761 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.796779 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zxhl\" (UniqueName: \"kubernetes.io/projected/4f461d08-f275-49fd-be5d-3f4198d81343-kube-api-access-8zxhl\") on node \"crc\" DevicePath \"\"" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.796811 4970 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.796826 4970 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/4f461d08-f275-49fd-be5d-3f4198d81343-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.796840 4970 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.796853 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:26:22 crc kubenswrapper[4970]: I0930 10:26:22.796873 4970 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/4f461d08-f275-49fd-be5d-3f4198d81343-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.095821 4970 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" event={"ID":"4f461d08-f275-49fd-be5d-3f4198d81343","Type":"ContainerDied","Data":"ea44ca5cc0a5720e61487b40b9b5ad0d77f5d54c1046c717eb54d911d8c72ee3"} Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.095867 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea44ca5cc0a5720e61487b40b9b5ad0d77f5d54c1046c717eb54d911d8c72ee3" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.095942 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-428p6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.204069 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6"] Sep 30 10:26:23 crc kubenswrapper[4970]: E0930 10:26:23.204568 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f461d08-f275-49fd-be5d-3f4198d81343" containerName="nova-edpm-deployment-openstack-edpm-ipam" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.204591 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f461d08-f275-49fd-be5d-3f4198d81343" containerName="nova-edpm-deployment-openstack-edpm-ipam" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.204823 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f461d08-f275-49fd-be5d-3f4198d81343" containerName="nova-edpm-deployment-openstack-edpm-ipam" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.205639 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.211229 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.211579 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.214662 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.214816 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-g6c69" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.214952 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.220878 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6"] Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.305409 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.305536 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn9jg\" (UniqueName: 
\"kubernetes.io/projected/54899213-55ca-42b6-8838-e42c962341b6-kube-api-access-dn9jg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.305590 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.305666 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.305692 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.305726 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.305749 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.406848 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.406901 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.406980 4970 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn9jg\" (UniqueName: \"kubernetes.io/projected/54899213-55ca-42b6-8838-e42c962341b6-kube-api-access-dn9jg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.407058 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.407136 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.407170 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.407216 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.412673 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.419576 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.425149 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc 
kubenswrapper[4970]: I0930 10:26:23.425306 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.425723 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.426237 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.427171 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn9jg\" (UniqueName: \"kubernetes.io/projected/54899213-55ca-42b6-8838-e42c962341b6-kube-api-access-dn9jg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:23 crc kubenswrapper[4970]: I0930 10:26:23.527141 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:26:24 crc kubenswrapper[4970]: I0930 10:26:24.131262 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6"] Sep 30 10:26:24 crc kubenswrapper[4970]: W0930 10:26:24.131754 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54899213_55ca_42b6_8838_e42c962341b6.slice/crio-e0c32057da3485b3b39f217f819a495f7f612278bbdec518b24d238c6fb9ff96 WatchSource:0}: Error finding container e0c32057da3485b3b39f217f819a495f7f612278bbdec518b24d238c6fb9ff96: Status 404 returned error can't find the container with id e0c32057da3485b3b39f217f819a495f7f612278bbdec518b24d238c6fb9ff96 Sep 30 10:26:25 crc kubenswrapper[4970]: I0930 10:26:25.115377 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" event={"ID":"54899213-55ca-42b6-8838-e42c962341b6","Type":"ContainerStarted","Data":"c322845e3c273c278f8d34c94d1218457e84f40a2aef2e890d86c905f85cb337"} Sep 30 10:26:25 crc kubenswrapper[4970]: I0930 10:26:25.115894 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" event={"ID":"54899213-55ca-42b6-8838-e42c962341b6","Type":"ContainerStarted","Data":"e0c32057da3485b3b39f217f819a495f7f612278bbdec518b24d238c6fb9ff96"} Sep 30 10:26:25 crc kubenswrapper[4970]: I0930 10:26:25.139138 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" podStartSLOduration=1.610132985 podStartE2EDuration="2.139116211s" podCreationTimestamp="2025-09-30 10:26:23 +0000 UTC" firstStartedPulling="2025-09-30 10:26:24.134167949 +0000 UTC m=+2397.206018903" lastFinishedPulling="2025-09-30 10:26:24.663151145 +0000 UTC m=+2397.735002129" observedRunningTime="2025-09-30 10:26:25.13471935 +0000 UTC m=+2398.206570294" watchObservedRunningTime="2025-09-30 10:26:25.139116211 +0000 UTC m=+2398.210967145" Sep 30 10:26:34 crc kubenswrapper[4970]: I0930 10:26:34.821379 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:26:34 crc kubenswrapper[4970]: I0930 10:26:34.822084 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:26:34 crc kubenswrapper[4970]: I0930 10:26:34.822148 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 10:26:34 crc kubenswrapper[4970]: I0930 10:26:34.823204 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 10:26:34 crc 
kubenswrapper[4970]: I0930 10:26:34.823296 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" gracePeriod=600 Sep 30 10:26:34 crc kubenswrapper[4970]: E0930 10:26:34.950928 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:26:35 crc kubenswrapper[4970]: I0930 10:26:35.209495 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" exitCode=0 Sep 30 10:26:35 crc kubenswrapper[4970]: I0930 10:26:35.209584 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf"} Sep 30 10:26:35 crc kubenswrapper[4970]: I0930 10:26:35.209633 4970 scope.go:117] "RemoveContainer" containerID="9908c880d37d8bba739f99f7c0b8663d5d7ff5cb7bd21d70d725e2a30dda3ef0" Sep 30 10:26:35 crc kubenswrapper[4970]: I0930 10:26:35.210744 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:26:35 crc kubenswrapper[4970]: E0930 10:26:35.211461 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:26:49 crc kubenswrapper[4970]: I0930 10:26:49.669175 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:26:49 crc kubenswrapper[4970]: E0930 10:26:49.670707 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:27:01 crc kubenswrapper[4970]: I0930 10:27:01.668463 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:27:01 crc kubenswrapper[4970]: E0930 10:27:01.669207 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:27:14 crc kubenswrapper[4970]: I0930 10:27:14.668914 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:27:14 crc kubenswrapper[4970]: E0930 10:27:14.670502 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:27:27 crc kubenswrapper[4970]: I0930 10:27:27.684410 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:27:27 crc kubenswrapper[4970]: E0930 10:27:27.685234 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:27:40 crc kubenswrapper[4970]: I0930 10:27:40.669108 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:27:40 crc kubenswrapper[4970]: E0930 10:27:40.670374 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:27:51 crc kubenswrapper[4970]: I0930 10:27:51.668710 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:27:51 crc kubenswrapper[4970]: E0930 10:27:51.669486 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:28:04 crc kubenswrapper[4970]: I0930 10:28:04.668599 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:28:04 crc kubenswrapper[4970]: E0930 10:28:04.669479 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:28:17 crc kubenswrapper[4970]: I0930 10:28:17.676499 4970 
scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:28:17 crc kubenswrapper[4970]: E0930 10:28:17.677851 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:28:25 crc kubenswrapper[4970]: I0930 10:28:25.829240 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w5z9m"] Sep 30 10:28:25 crc kubenswrapper[4970]: I0930 10:28:25.832984 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:25 crc kubenswrapper[4970]: I0930 10:28:25.843613 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w5z9m"] Sep 30 10:28:25 crc kubenswrapper[4970]: I0930 10:28:25.980321 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d663870c-c18a-4444-bba2-104ecce27a7a-utilities\") pod \"certified-operators-w5z9m\" (UID: \"d663870c-c18a-4444-bba2-104ecce27a7a\") " pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:25 crc kubenswrapper[4970]: I0930 10:28:25.980443 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrkfm\" (UniqueName: \"kubernetes.io/projected/d663870c-c18a-4444-bba2-104ecce27a7a-kube-api-access-hrkfm\") pod \"certified-operators-w5z9m\" (UID: \"d663870c-c18a-4444-bba2-104ecce27a7a\") " pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:25 crc kubenswrapper[4970]: I0930 10:28:25.980629 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d663870c-c18a-4444-bba2-104ecce27a7a-catalog-content\") pod \"certified-operators-w5z9m\" (UID: \"d663870c-c18a-4444-bba2-104ecce27a7a\") " pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:26 crc kubenswrapper[4970]: I0930 10:28:26.083349 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d663870c-c18a-4444-bba2-104ecce27a7a-catalog-content\") pod \"certified-operators-w5z9m\" (UID: \"d663870c-c18a-4444-bba2-104ecce27a7a\") " pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:26 crc kubenswrapper[4970]: I0930 10:28:26.083684 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d663870c-c18a-4444-bba2-104ecce27a7a-catalog-content\") pod \"certified-operators-w5z9m\" (UID: \"d663870c-c18a-4444-bba2-104ecce27a7a\") " pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:26 crc kubenswrapper[4970]: I0930 10:28:26.083778 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d663870c-c18a-4444-bba2-104ecce27a7a-utilities\") pod \"certified-operators-w5z9m\" (UID: \"d663870c-c18a-4444-bba2-104ecce27a7a\") " 
pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:26 crc kubenswrapper[4970]: I0930 10:28:26.083905 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrkfm\" (UniqueName: \"kubernetes.io/projected/d663870c-c18a-4444-bba2-104ecce27a7a-kube-api-access-hrkfm\") pod \"certified-operators-w5z9m\" (UID: \"d663870c-c18a-4444-bba2-104ecce27a7a\") " pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:26 crc kubenswrapper[4970]: I0930 10:28:26.084275 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d663870c-c18a-4444-bba2-104ecce27a7a-utilities\") pod \"certified-operators-w5z9m\" (UID: \"d663870c-c18a-4444-bba2-104ecce27a7a\") " pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:26 crc kubenswrapper[4970]: I0930 10:28:26.106034 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrkfm\" (UniqueName: \"kubernetes.io/projected/d663870c-c18a-4444-bba2-104ecce27a7a-kube-api-access-hrkfm\") pod \"certified-operators-w5z9m\" (UID: \"d663870c-c18a-4444-bba2-104ecce27a7a\") " pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:26 crc kubenswrapper[4970]: I0930 10:28:26.193974 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:26 crc kubenswrapper[4970]: I0930 10:28:26.746557 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w5z9m"] Sep 30 10:28:26 crc kubenswrapper[4970]: W0930 10:28:26.750839 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd663870c_c18a_4444_bba2_104ecce27a7a.slice/crio-0d26e7cadbda0977304912e8e91929a2c99d30f631773c331b4555677531258a WatchSource:0}: Error finding container 0d26e7cadbda0977304912e8e91929a2c99d30f631773c331b4555677531258a: Status 404 returned error can't find the container with id 0d26e7cadbda0977304912e8e91929a2c99d30f631773c331b4555677531258a Sep 30 10:28:27 crc kubenswrapper[4970]: I0930 10:28:27.412302 4970 generic.go:334] "Generic (PLEG): container finished" podID="d663870c-c18a-4444-bba2-104ecce27a7a" containerID="0be5a81ebcf8beb0896bd23fda0fb8472e4ceaf73f593ea4b64edec43bd7ca3c" exitCode=0 Sep 30 10:28:27 crc kubenswrapper[4970]: I0930 10:28:27.412354 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5z9m" event={"ID":"d663870c-c18a-4444-bba2-104ecce27a7a","Type":"ContainerDied","Data":"0be5a81ebcf8beb0896bd23fda0fb8472e4ceaf73f593ea4b64edec43bd7ca3c"} Sep 30 10:28:27 crc kubenswrapper[4970]: I0930 10:28:27.412382 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5z9m" event={"ID":"d663870c-c18a-4444-bba2-104ecce27a7a","Type":"ContainerStarted","Data":"0d26e7cadbda0977304912e8e91929a2c99d30f631773c331b4555677531258a"} Sep 30 10:28:27 crc kubenswrapper[4970]: I0930 10:28:27.417619 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 10:28:29 crc kubenswrapper[4970]: I0930 10:28:29.446981 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5z9m" event={"ID":"d663870c-c18a-4444-bba2-104ecce27a7a","Type":"ContainerStarted","Data":"ad573d5634856c9cb06ac6829de0de3862acbd543a03b906d77bda9ef2d275e2"} Sep 30 10:28:30 crc 
kubenswrapper[4970]: I0930 10:28:30.464631 4970 generic.go:334] "Generic (PLEG): container finished" podID="d663870c-c18a-4444-bba2-104ecce27a7a" containerID="ad573d5634856c9cb06ac6829de0de3862acbd543a03b906d77bda9ef2d275e2" exitCode=0 Sep 30 10:28:30 crc kubenswrapper[4970]: I0930 10:28:30.464703 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5z9m" event={"ID":"d663870c-c18a-4444-bba2-104ecce27a7a","Type":"ContainerDied","Data":"ad573d5634856c9cb06ac6829de0de3862acbd543a03b906d77bda9ef2d275e2"} Sep 30 10:28:31 crc kubenswrapper[4970]: I0930 10:28:31.474486 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5z9m" event={"ID":"d663870c-c18a-4444-bba2-104ecce27a7a","Type":"ContainerStarted","Data":"fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d"} Sep 30 10:28:31 crc kubenswrapper[4970]: I0930 10:28:31.493748 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w5z9m" podStartSLOduration=2.896309637 podStartE2EDuration="6.493730333s" podCreationTimestamp="2025-09-30 10:28:25 +0000 UTC" firstStartedPulling="2025-09-30 10:28:27.417355388 +0000 UTC m=+2520.489206322" lastFinishedPulling="2025-09-30 10:28:31.014776084 +0000 UTC m=+2524.086627018" observedRunningTime="2025-09-30 10:28:31.489547728 +0000 UTC m=+2524.561398672" watchObservedRunningTime="2025-09-30 10:28:31.493730333 +0000 UTC m=+2524.565581267" Sep 30 10:28:31 crc kubenswrapper[4970]: I0930 10:28:31.669015 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:28:31 crc kubenswrapper[4970]: E0930 10:28:31.669223 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:28:36 crc kubenswrapper[4970]: I0930 10:28:36.195411 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:36 crc kubenswrapper[4970]: I0930 10:28:36.195821 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:36 crc kubenswrapper[4970]: I0930 10:28:36.272030 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:36 crc kubenswrapper[4970]: I0930 10:28:36.574940 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:36 crc kubenswrapper[4970]: I0930 10:28:36.629430 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w5z9m"] Sep 30 10:28:38 crc kubenswrapper[4970]: I0930 10:28:38.539960 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w5z9m" podUID="d663870c-c18a-4444-bba2-104ecce27a7a" containerName="registry-server" containerID="cri-o://fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d" gracePeriod=2 Sep 30 10:28:38 crc kubenswrapper[4970]: I0930 
Sep 30 10:28:38 crc kubenswrapper[4970]: I0930 10:28:38.958571 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w5z9m"
Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.154667 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrkfm\" (UniqueName: \"kubernetes.io/projected/d663870c-c18a-4444-bba2-104ecce27a7a-kube-api-access-hrkfm\") pod \"d663870c-c18a-4444-bba2-104ecce27a7a\" (UID: \"d663870c-c18a-4444-bba2-104ecce27a7a\") "
Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.155707 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d663870c-c18a-4444-bba2-104ecce27a7a-utilities\") pod \"d663870c-c18a-4444-bba2-104ecce27a7a\" (UID: \"d663870c-c18a-4444-bba2-104ecce27a7a\") "
Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.155743 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d663870c-c18a-4444-bba2-104ecce27a7a-catalog-content\") pod \"d663870c-c18a-4444-bba2-104ecce27a7a\" (UID: \"d663870c-c18a-4444-bba2-104ecce27a7a\") "
Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.156919 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d663870c-c18a-4444-bba2-104ecce27a7a-utilities" (OuterVolumeSpecName: "utilities") pod "d663870c-c18a-4444-bba2-104ecce27a7a" (UID: "d663870c-c18a-4444-bba2-104ecce27a7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.162137 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d663870c-c18a-4444-bba2-104ecce27a7a-kube-api-access-hrkfm" (OuterVolumeSpecName: "kube-api-access-hrkfm") pod "d663870c-c18a-4444-bba2-104ecce27a7a" (UID: "d663870c-c18a-4444-bba2-104ecce27a7a"). InnerVolumeSpecName "kube-api-access-hrkfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.257814 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d663870c-c18a-4444-bba2-104ecce27a7a-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.258105 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrkfm\" (UniqueName: \"kubernetes.io/projected/d663870c-c18a-4444-bba2-104ecce27a7a-kube-api-access-hrkfm\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.461932 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d663870c-c18a-4444-bba2-104ecce27a7a-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.554871 4970 generic.go:334] "Generic (PLEG): container finished" podID="d663870c-c18a-4444-bba2-104ecce27a7a" containerID="fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d" exitCode=0 Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.554928 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5z9m" event={"ID":"d663870c-c18a-4444-bba2-104ecce27a7a","Type":"ContainerDied","Data":"fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d"} Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.554969 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w5z9m" event={"ID":"d663870c-c18a-4444-bba2-104ecce27a7a","Type":"ContainerDied","Data":"0d26e7cadbda0977304912e8e91929a2c99d30f631773c331b4555677531258a"} Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.555034 4970 scope.go:117] "RemoveContainer" containerID="fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d" Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.555233 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w5z9m" Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.595668 4970 scope.go:117] "RemoveContainer" containerID="ad573d5634856c9cb06ac6829de0de3862acbd543a03b906d77bda9ef2d275e2" Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.606451 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w5z9m"] Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.615527 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w5z9m"] Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.630354 4970 scope.go:117] "RemoveContainer" containerID="0be5a81ebcf8beb0896bd23fda0fb8472e4ceaf73f593ea4b64edec43bd7ca3c" Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.681559 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d663870c-c18a-4444-bba2-104ecce27a7a" path="/var/lib/kubelet/pods/d663870c-c18a-4444-bba2-104ecce27a7a/volumes" Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.682119 4970 scope.go:117] "RemoveContainer" containerID="fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d" Sep 30 10:28:39 crc kubenswrapper[4970]: E0930 10:28:39.689550 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d\": container with ID starting with fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d not found: ID does not exist" containerID="fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d" Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.689607 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d"} err="failed to get container status \"fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d\": rpc error: code = 
Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.689607 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d"} err="failed to get container status \"fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d\": rpc error: code = NotFound desc = could not find container \"fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d\": container with ID starting with fdea581467aecba74640ed6535cdc2995fd104821c237419485d769c1d66215d not found: ID does not exist"
Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.689641 4970 scope.go:117] "RemoveContainer" containerID="ad573d5634856c9cb06ac6829de0de3862acbd543a03b906d77bda9ef2d275e2"
Sep 30 10:28:39 crc kubenswrapper[4970]: E0930 10:28:39.690148 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad573d5634856c9cb06ac6829de0de3862acbd543a03b906d77bda9ef2d275e2\": container with ID starting with ad573d5634856c9cb06ac6829de0de3862acbd543a03b906d77bda9ef2d275e2 not found: ID does not exist" containerID="ad573d5634856c9cb06ac6829de0de3862acbd543a03b906d77bda9ef2d275e2"
Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.690177 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad573d5634856c9cb06ac6829de0de3862acbd543a03b906d77bda9ef2d275e2"} err="failed to get container status \"ad573d5634856c9cb06ac6829de0de3862acbd543a03b906d77bda9ef2d275e2\": rpc error: code = NotFound desc = could not find container \"ad573d5634856c9cb06ac6829de0de3862acbd543a03b906d77bda9ef2d275e2\": container with ID starting with ad573d5634856c9cb06ac6829de0de3862acbd543a03b906d77bda9ef2d275e2 not found: ID does not exist"
Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.690201 4970 scope.go:117] "RemoveContainer" containerID="0be5a81ebcf8beb0896bd23fda0fb8472e4ceaf73f593ea4b64edec43bd7ca3c"
Sep 30 10:28:39 crc kubenswrapper[4970]: E0930 10:28:39.691018 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0be5a81ebcf8beb0896bd23fda0fb8472e4ceaf73f593ea4b64edec43bd7ca3c\": container with ID starting with 0be5a81ebcf8beb0896bd23fda0fb8472e4ceaf73f593ea4b64edec43bd7ca3c not found: ID does not exist" containerID="0be5a81ebcf8beb0896bd23fda0fb8472e4ceaf73f593ea4b64edec43bd7ca3c"
Sep 30 10:28:39 crc kubenswrapper[4970]: I0930 10:28:39.691052 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be5a81ebcf8beb0896bd23fda0fb8472e4ceaf73f593ea4b64edec43bd7ca3c"} err="failed to get container status \"0be5a81ebcf8beb0896bd23fda0fb8472e4ceaf73f593ea4b64edec43bd7ca3c\": rpc error: code = NotFound desc = could not find container \"0be5a81ebcf8beb0896bd23fda0fb8472e4ceaf73f593ea4b64edec43bd7ca3c\": container with ID starting with 0be5a81ebcf8beb0896bd23fda0fb8472e4ceaf73f593ea4b64edec43bd7ca3c not found: ID does not exist"
Sep 30 10:28:46 crc kubenswrapper[4970]: I0930 10:28:46.669657 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf"
Sep 30 10:28:46 crc kubenswrapper[4970]: E0930 10:28:46.670435 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
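The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triplets above look alarming but are benign: the kubelet asks the CRI runtime for the status of a container it is about to delete, and a gRPC NotFound just means the container is already gone, so the deletion is effectively idempotent. A minimal Go sketch of that idea, using a sentinel error in place of the real gRPC status (illustrative, not kubelet code; the ID is truncated for the example):

package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the CRI runtime's gRPC NotFound status.
var errNotFound = errors.New("rpc error: code = NotFound")

// containerStatus reports an error when the runtime no longer knows the ID.
func containerStatus(id string, alive map[string]bool) error {
	if !alive[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	return nil
}

// removeContainer treats NotFound as "already deleted" and moves on.
func removeContainer(id string, alive map[string]bool) {
	if err := containerStatus(id, alive); errors.Is(err, errNotFound) {
		fmt.Printf("DeleteContainer returned error (ignored): %v\n", err)
		return // the earlier RemoveContainer pass already cleaned it up
	}
	delete(alive, id)
}

func main() {
	alive := map[string]bool{} // containers were already removed above
	removeContainer("fdea581467aecba7", alive)
}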
containerID="c322845e3c273c278f8d34c94d1218457e84f40a2aef2e890d86c905f85cb337" exitCode=0 Sep 30 10:28:47 crc kubenswrapper[4970]: I0930 10:28:47.657557 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" event={"ID":"54899213-55ca-42b6-8838-e42c962341b6","Type":"ContainerDied","Data":"c322845e3c273c278f8d34c94d1218457e84f40a2aef2e890d86c905f85cb337"} Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.092626 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.153902 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-2\") pod \"54899213-55ca-42b6-8838-e42c962341b6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.154014 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-1\") pod \"54899213-55ca-42b6-8838-e42c962341b6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.154134 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-telemetry-combined-ca-bundle\") pod \"54899213-55ca-42b6-8838-e42c962341b6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.154186 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-inventory\") pod \"54899213-55ca-42b6-8838-e42c962341b6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.154230 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-0\") pod \"54899213-55ca-42b6-8838-e42c962341b6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.154292 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ssh-key\") pod \"54899213-55ca-42b6-8838-e42c962341b6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.154322 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn9jg\" (UniqueName: \"kubernetes.io/projected/54899213-55ca-42b6-8838-e42c962341b6-kube-api-access-dn9jg\") pod \"54899213-55ca-42b6-8838-e42c962341b6\" (UID: \"54899213-55ca-42b6-8838-e42c962341b6\") " Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.159736 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "54899213-55ca-42b6-8838-e42c962341b6" 
(UID: "54899213-55ca-42b6-8838-e42c962341b6"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.160597 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54899213-55ca-42b6-8838-e42c962341b6-kube-api-access-dn9jg" (OuterVolumeSpecName: "kube-api-access-dn9jg") pod "54899213-55ca-42b6-8838-e42c962341b6" (UID: "54899213-55ca-42b6-8838-e42c962341b6"). InnerVolumeSpecName "kube-api-access-dn9jg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.185183 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "54899213-55ca-42b6-8838-e42c962341b6" (UID: "54899213-55ca-42b6-8838-e42c962341b6"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.192010 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "54899213-55ca-42b6-8838-e42c962341b6" (UID: "54899213-55ca-42b6-8838-e42c962341b6"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.195711 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "54899213-55ca-42b6-8838-e42c962341b6" (UID: "54899213-55ca-42b6-8838-e42c962341b6"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.216334 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "54899213-55ca-42b6-8838-e42c962341b6" (UID: "54899213-55ca-42b6-8838-e42c962341b6"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.219091 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-inventory" (OuterVolumeSpecName: "inventory") pod "54899213-55ca-42b6-8838-e42c962341b6" (UID: "54899213-55ca-42b6-8838-e42c962341b6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.256925 4970 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.256970 4970 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.256985 4970 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.257017 4970 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-inventory\") on node \"crc\" DevicePath \"\"" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.257033 4970 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.257045 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/54899213-55ca-42b6-8838-e42c962341b6-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.257058 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn9jg\" (UniqueName: \"kubernetes.io/projected/54899213-55ca-42b6-8838-e42c962341b6-kube-api-access-dn9jg\") on node \"crc\" DevicePath \"\"" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.680314 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.686129 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6" event={"ID":"54899213-55ca-42b6-8838-e42c962341b6","Type":"ContainerDied","Data":"e0c32057da3485b3b39f217f819a495f7f612278bbdec518b24d238c6fb9ff96"} Sep 30 10:28:49 crc kubenswrapper[4970]: I0930 10:28:49.686168 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0c32057da3485b3b39f217f819a495f7f612278bbdec518b24d238c6fb9ff96" Sep 30 10:28:52 crc kubenswrapper[4970]: E0930 10:28:52.592032 4970 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.132:42844->38.102.83.132:42257: write tcp 38.102.83.132:42844->38.102.83.132:42257: write: broken pipe Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.466453 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dk7kg"] Sep 30 10:28:53 crc kubenswrapper[4970]: E0930 10:28:53.467178 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d663870c-c18a-4444-bba2-104ecce27a7a" containerName="extract-utilities" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.467191 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d663870c-c18a-4444-bba2-104ecce27a7a" containerName="extract-utilities" Sep 30 10:28:53 crc kubenswrapper[4970]: E0930 10:28:53.467205 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d663870c-c18a-4444-bba2-104ecce27a7a" containerName="registry-server" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.467214 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d663870c-c18a-4444-bba2-104ecce27a7a" containerName="registry-server" Sep 30 10:28:53 crc kubenswrapper[4970]: E0930 10:28:53.467246 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d663870c-c18a-4444-bba2-104ecce27a7a" containerName="extract-content" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.467252 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d663870c-c18a-4444-bba2-104ecce27a7a" containerName="extract-content" Sep 30 10:28:53 crc kubenswrapper[4970]: E0930 10:28:53.467261 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54899213-55ca-42b6-8838-e42c962341b6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.467267 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="54899213-55ca-42b6-8838-e42c962341b6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.467507 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d663870c-c18a-4444-bba2-104ecce27a7a" containerName="registry-server" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.467535 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="54899213-55ca-42b6-8838-e42c962341b6" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.468916 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk7kg" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.492837 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk7kg"] Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.565795 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74zc5\" (UniqueName: \"kubernetes.io/projected/877e2075-3695-4d86-8442-e400d8c9e032-kube-api-access-74zc5\") pod \"redhat-marketplace-dk7kg\" (UID: \"877e2075-3695-4d86-8442-e400d8c9e032\") " pod="openshift-marketplace/redhat-marketplace-dk7kg" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.566025 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877e2075-3695-4d86-8442-e400d8c9e032-utilities\") pod \"redhat-marketplace-dk7kg\" (UID: \"877e2075-3695-4d86-8442-e400d8c9e032\") " pod="openshift-marketplace/redhat-marketplace-dk7kg" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.566323 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877e2075-3695-4d86-8442-e400d8c9e032-catalog-content\") pod \"redhat-marketplace-dk7kg\" (UID: \"877e2075-3695-4d86-8442-e400d8c9e032\") " pod="openshift-marketplace/redhat-marketplace-dk7kg" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.668949 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877e2075-3695-4d86-8442-e400d8c9e032-catalog-content\") pod \"redhat-marketplace-dk7kg\" (UID: \"877e2075-3695-4d86-8442-e400d8c9e032\") " pod="openshift-marketplace/redhat-marketplace-dk7kg" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.669115 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74zc5\" (UniqueName: \"kubernetes.io/projected/877e2075-3695-4d86-8442-e400d8c9e032-kube-api-access-74zc5\") pod \"redhat-marketplace-dk7kg\" (UID: \"877e2075-3695-4d86-8442-e400d8c9e032\") " pod="openshift-marketplace/redhat-marketplace-dk7kg" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.669235 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877e2075-3695-4d86-8442-e400d8c9e032-utilities\") pod \"redhat-marketplace-dk7kg\" (UID: \"877e2075-3695-4d86-8442-e400d8c9e032\") " pod="openshift-marketplace/redhat-marketplace-dk7kg" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.669583 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877e2075-3695-4d86-8442-e400d8c9e032-catalog-content\") pod \"redhat-marketplace-dk7kg\" (UID: \"877e2075-3695-4d86-8442-e400d8c9e032\") " pod="openshift-marketplace/redhat-marketplace-dk7kg" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.669676 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877e2075-3695-4d86-8442-e400d8c9e032-utilities\") pod \"redhat-marketplace-dk7kg\" (UID: \"877e2075-3695-4d86-8442-e400d8c9e032\") " pod="openshift-marketplace/redhat-marketplace-dk7kg" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.709180 4970 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-74zc5\" (UniqueName: \"kubernetes.io/projected/877e2075-3695-4d86-8442-e400d8c9e032-kube-api-access-74zc5\") pod \"redhat-marketplace-dk7kg\" (UID: \"877e2075-3695-4d86-8442-e400d8c9e032\") " pod="openshift-marketplace/redhat-marketplace-dk7kg" Sep 30 10:28:53 crc kubenswrapper[4970]: I0930 10:28:53.807412 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk7kg" Sep 30 10:28:54 crc kubenswrapper[4970]: I0930 10:28:54.340951 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk7kg"] Sep 30 10:28:54 crc kubenswrapper[4970]: I0930 10:28:54.729860 4970 generic.go:334] "Generic (PLEG): container finished" podID="877e2075-3695-4d86-8442-e400d8c9e032" containerID="e98ea3da0989619a6b8d253e5ac26cb5da3a2a54ad554ec2644a87b05ceb5272" exitCode=0 Sep 30 10:28:54 crc kubenswrapper[4970]: I0930 10:28:54.729956 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7kg" event={"ID":"877e2075-3695-4d86-8442-e400d8c9e032","Type":"ContainerDied","Data":"e98ea3da0989619a6b8d253e5ac26cb5da3a2a54ad554ec2644a87b05ceb5272"} Sep 30 10:28:54 crc kubenswrapper[4970]: I0930 10:28:54.730216 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7kg" event={"ID":"877e2075-3695-4d86-8442-e400d8c9e032","Type":"ContainerStarted","Data":"4cc20d4a780bb944159ff1337c001bda65e9153abbfb308e6defb2ba2aaf05f7"} Sep 30 10:28:55 crc kubenswrapper[4970]: I0930 10:28:55.739714 4970 generic.go:334] "Generic (PLEG): container finished" podID="877e2075-3695-4d86-8442-e400d8c9e032" containerID="a115f12e378e1e9a660e7f0b3c0acf165e661cde753bc538ecd7cfa7f2726536" exitCode=0 Sep 30 10:28:55 crc kubenswrapper[4970]: I0930 10:28:55.739775 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7kg" event={"ID":"877e2075-3695-4d86-8442-e400d8c9e032","Type":"ContainerDied","Data":"a115f12e378e1e9a660e7f0b3c0acf165e661cde753bc538ecd7cfa7f2726536"} Sep 30 10:28:56 crc kubenswrapper[4970]: I0930 10:28:56.752789 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7kg" event={"ID":"877e2075-3695-4d86-8442-e400d8c9e032","Type":"ContainerStarted","Data":"ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f"} Sep 30 10:28:56 crc kubenswrapper[4970]: I0930 10:28:56.778371 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dk7kg" podStartSLOduration=2.334557586 podStartE2EDuration="3.778343273s" podCreationTimestamp="2025-09-30 10:28:53 +0000 UTC" firstStartedPulling="2025-09-30 10:28:54.732570821 +0000 UTC m=+2547.804421755" lastFinishedPulling="2025-09-30 10:28:56.176356508 +0000 UTC m=+2549.248207442" observedRunningTime="2025-09-30 10:28:56.776459811 +0000 UTC m=+2549.848310735" watchObservedRunningTime="2025-09-30 10:28:56.778343273 +0000 UTC m=+2549.850194217" Sep 30 10:28:59 crc kubenswrapper[4970]: I0930 10:28:59.668923 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:28:59 crc kubenswrapper[4970]: E0930 10:28:59.669652 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Sep 30 10:28:59 crc kubenswrapper[4970]: E0930 10:28:59.669652 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:29:03 crc kubenswrapper[4970]: I0930 10:29:03.808526 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dk7kg"
Sep 30 10:29:03 crc kubenswrapper[4970]: I0930 10:29:03.809110 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dk7kg"
Sep 30 10:29:03 crc kubenswrapper[4970]: I0930 10:29:03.857944 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dk7kg"
Sep 30 10:29:04 crc kubenswrapper[4970]: I0930 10:29:04.888916 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dk7kg"
Sep 30 10:29:04 crc kubenswrapper[4970]: I0930 10:29:04.956972 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk7kg"]
Sep 30 10:29:06 crc kubenswrapper[4970]: I0930 10:29:06.855192 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dk7kg" podUID="877e2075-3695-4d86-8442-e400d8c9e032" containerName="registry-server" containerID="cri-o://ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f" gracePeriod=2
Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.321790 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk7kg"
Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.453720 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877e2075-3695-4d86-8442-e400d8c9e032-catalog-content\") pod \"877e2075-3695-4d86-8442-e400d8c9e032\" (UID: \"877e2075-3695-4d86-8442-e400d8c9e032\") "
Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.454454 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74zc5\" (UniqueName: \"kubernetes.io/projected/877e2075-3695-4d86-8442-e400d8c9e032-kube-api-access-74zc5\") pod \"877e2075-3695-4d86-8442-e400d8c9e032\" (UID: \"877e2075-3695-4d86-8442-e400d8c9e032\") "
Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.454596 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877e2075-3695-4d86-8442-e400d8c9e032-utilities\") pod \"877e2075-3695-4d86-8442-e400d8c9e032\" (UID: \"877e2075-3695-4d86-8442-e400d8c9e032\") "
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.469337 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/877e2075-3695-4d86-8442-e400d8c9e032-kube-api-access-74zc5" (OuterVolumeSpecName: "kube-api-access-74zc5") pod "877e2075-3695-4d86-8442-e400d8c9e032" (UID: "877e2075-3695-4d86-8442-e400d8c9e032"). InnerVolumeSpecName "kube-api-access-74zc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.478246 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/877e2075-3695-4d86-8442-e400d8c9e032-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "877e2075-3695-4d86-8442-e400d8c9e032" (UID: "877e2075-3695-4d86-8442-e400d8c9e032"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.556757 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74zc5\" (UniqueName: \"kubernetes.io/projected/877e2075-3695-4d86-8442-e400d8c9e032-kube-api-access-74zc5\") on node \"crc\" DevicePath \"\"" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.556788 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877e2075-3695-4d86-8442-e400d8c9e032-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.556799 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877e2075-3695-4d86-8442-e400d8c9e032-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.869243 4970 generic.go:334] "Generic (PLEG): container finished" podID="877e2075-3695-4d86-8442-e400d8c9e032" containerID="ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f" exitCode=0 Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.870251 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7kg" event={"ID":"877e2075-3695-4d86-8442-e400d8c9e032","Type":"ContainerDied","Data":"ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f"} Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.870355 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dk7kg" event={"ID":"877e2075-3695-4d86-8442-e400d8c9e032","Type":"ContainerDied","Data":"4cc20d4a780bb944159ff1337c001bda65e9153abbfb308e6defb2ba2aaf05f7"} Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.870473 4970 scope.go:117] "RemoveContainer" containerID="ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.870682 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dk7kg" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.908708 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk7kg"] Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.917249 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dk7kg"] Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.918215 4970 scope.go:117] "RemoveContainer" containerID="a115f12e378e1e9a660e7f0b3c0acf165e661cde753bc538ecd7cfa7f2726536" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.960289 4970 scope.go:117] "RemoveContainer" containerID="e98ea3da0989619a6b8d253e5ac26cb5da3a2a54ad554ec2644a87b05ceb5272" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.990503 4970 scope.go:117] "RemoveContainer" containerID="ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f" Sep 30 10:29:07 crc kubenswrapper[4970]: E0930 10:29:07.991215 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f\": container with ID starting with ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f not found: ID does not exist" containerID="ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.991268 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f"} err="failed to get container status \"ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f\": rpc error: code = NotFound desc = could not find container \"ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f\": container with ID starting with ca3ccc2968a60c4297fc2b3f5d51625c4df6c0733375dded50d181ac72aec32f not found: ID does not exist" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.991306 4970 scope.go:117] "RemoveContainer" containerID="a115f12e378e1e9a660e7f0b3c0acf165e661cde753bc538ecd7cfa7f2726536" Sep 30 10:29:07 crc kubenswrapper[4970]: E0930 10:29:07.991804 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a115f12e378e1e9a660e7f0b3c0acf165e661cde753bc538ecd7cfa7f2726536\": container with ID starting with a115f12e378e1e9a660e7f0b3c0acf165e661cde753bc538ecd7cfa7f2726536 not found: ID does not exist" containerID="a115f12e378e1e9a660e7f0b3c0acf165e661cde753bc538ecd7cfa7f2726536" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.991906 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a115f12e378e1e9a660e7f0b3c0acf165e661cde753bc538ecd7cfa7f2726536"} err="failed to get container status \"a115f12e378e1e9a660e7f0b3c0acf165e661cde753bc538ecd7cfa7f2726536\": rpc error: code = NotFound desc = could not find container \"a115f12e378e1e9a660e7f0b3c0acf165e661cde753bc538ecd7cfa7f2726536\": container with ID starting with a115f12e378e1e9a660e7f0b3c0acf165e661cde753bc538ecd7cfa7f2726536 not found: ID does not exist" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.991959 4970 scope.go:117] "RemoveContainer" containerID="e98ea3da0989619a6b8d253e5ac26cb5da3a2a54ad554ec2644a87b05ceb5272" Sep 30 10:29:07 crc kubenswrapper[4970]: E0930 10:29:07.992459 4970 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e98ea3da0989619a6b8d253e5ac26cb5da3a2a54ad554ec2644a87b05ceb5272\": container with ID starting with e98ea3da0989619a6b8d253e5ac26cb5da3a2a54ad554ec2644a87b05ceb5272 not found: ID does not exist" containerID="e98ea3da0989619a6b8d253e5ac26cb5da3a2a54ad554ec2644a87b05ceb5272" Sep 30 10:29:07 crc kubenswrapper[4970]: I0930 10:29:07.992490 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e98ea3da0989619a6b8d253e5ac26cb5da3a2a54ad554ec2644a87b05ceb5272"} err="failed to get container status \"e98ea3da0989619a6b8d253e5ac26cb5da3a2a54ad554ec2644a87b05ceb5272\": rpc error: code = NotFound desc = could not find container \"e98ea3da0989619a6b8d253e5ac26cb5da3a2a54ad554ec2644a87b05ceb5272\": container with ID starting with e98ea3da0989619a6b8d253e5ac26cb5da3a2a54ad554ec2644a87b05ceb5272 not found: ID does not exist" Sep 30 10:29:09 crc kubenswrapper[4970]: I0930 10:29:09.679799 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="877e2075-3695-4d86-8442-e400d8c9e032" path="/var/lib/kubelet/pods/877e2075-3695-4d86-8442-e400d8c9e032/volumes" Sep 30 10:29:12 crc kubenswrapper[4970]: I0930 10:29:12.669293 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:29:12 crc kubenswrapper[4970]: E0930 10:29:12.669977 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:29:26 crc kubenswrapper[4970]: I0930 10:29:26.668806 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:29:26 crc kubenswrapper[4970]: E0930 10:29:26.669899 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.098823 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Sep 30 10:29:34 crc kubenswrapper[4970]: E0930 10:29:34.101449 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877e2075-3695-4d86-8442-e400d8c9e032" containerName="registry-server" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.101523 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="877e2075-3695-4d86-8442-e400d8c9e032" containerName="registry-server" Sep 30 10:29:34 crc kubenswrapper[4970]: E0930 10:29:34.101612 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877e2075-3695-4d86-8442-e400d8c9e032" containerName="extract-utilities" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.101629 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="877e2075-3695-4d86-8442-e400d8c9e032" containerName="extract-utilities" Sep 30 10:29:34 crc kubenswrapper[4970]: E0930 10:29:34.101647 4970 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877e2075-3695-4d86-8442-e400d8c9e032" containerName="extract-content" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.101659 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="877e2075-3695-4d86-8442-e400d8c9e032" containerName="extract-content" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.102401 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="877e2075-3695-4d86-8442-e400d8c9e032" containerName="registry-server" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.103826 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.107612 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.113304 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.113794 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.115072 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-dtgsh" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.120421 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.222280 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.222552 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b39c8562-3dd8-439a-b17d-967859c86ec2-config-data\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.222576 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.222624 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b39c8562-3dd8-439a-b17d-967859c86ec2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.222667 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv8mj\" (UniqueName: \"kubernetes.io/projected/b39c8562-3dd8-439a-b17d-967859c86ec2-kube-api-access-nv8mj\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " 
pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.222722 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b39c8562-3dd8-439a-b17d-967859c86ec2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.222736 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b39c8562-3dd8-439a-b17d-967859c86ec2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.222755 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.222771 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.324467 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv8mj\" (UniqueName: \"kubernetes.io/projected/b39c8562-3dd8-439a-b17d-967859c86ec2-kube-api-access-nv8mj\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.324609 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b39c8562-3dd8-439a-b17d-967859c86ec2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.324647 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b39c8562-3dd8-439a-b17d-967859c86ec2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.324684 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.324720 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " 
pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.325305 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b39c8562-3dd8-439a-b17d-967859c86ec2-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.325301 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.325594 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.325640 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b39c8562-3dd8-439a-b17d-967859c86ec2-config-data\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.325680 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.325784 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b39c8562-3dd8-439a-b17d-967859c86ec2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.326533 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b39c8562-3dd8-439a-b17d-967859c86ec2-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.326544 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b39c8562-3dd8-439a-b17d-967859c86ec2-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.328497 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b39c8562-3dd8-439a-b17d-967859c86ec2-config-data\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.331619 4970 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.332559 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.339413 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.363791 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv8mj\" (UniqueName: \"kubernetes.io/projected/b39c8562-3dd8-439a-b17d-967859c86ec2-kube-api-access-nv8mj\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.389825 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.446627 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Sep 30 10:29:34 crc kubenswrapper[4970]: I0930 10:29:34.892262 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Sep 30 10:29:34 crc kubenswrapper[4970]: W0930 10:29:34.897122 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb39c8562_3dd8_439a_b17d_967859c86ec2.slice/crio-f46c1b2e2485abf9c8600e2de2167688b7f5ebd2b8d2e02f34f654d0bf40a52d WatchSource:0}: Error finding container f46c1b2e2485abf9c8600e2de2167688b7f5ebd2b8d2e02f34f654d0bf40a52d: Status 404 returned error can't find the container with id f46c1b2e2485abf9c8600e2de2167688b7f5ebd2b8d2e02f34f654d0bf40a52d Sep 30 10:29:35 crc kubenswrapper[4970]: I0930 10:29:35.202542 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b39c8562-3dd8-439a-b17d-967859c86ec2","Type":"ContainerStarted","Data":"f46c1b2e2485abf9c8600e2de2167688b7f5ebd2b8d2e02f34f654d0bf40a52d"} Sep 30 10:29:39 crc kubenswrapper[4970]: I0930 10:29:39.669284 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:29:39 crc kubenswrapper[4970]: E0930 10:29:39.670167 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:29:51 crc kubenswrapper[4970]: I0930 10:29:51.668728 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:29:51 crc kubenswrapper[4970]: E0930 10:29:51.670054 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.138784 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97"] Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.142113 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.145125 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.145279 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.157332 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97"] Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.241167 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/609d4c09-e862-452e-b966-95f6d0b31959-config-volume\") pod \"collect-profiles-29320470-78z97\" (UID: \"609d4c09-e862-452e-b966-95f6d0b31959\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.241204 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/609d4c09-e862-452e-b966-95f6d0b31959-secret-volume\") pod \"collect-profiles-29320470-78z97\" (UID: \"609d4c09-e862-452e-b966-95f6d0b31959\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.241228 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw6v7\" (UniqueName: \"kubernetes.io/projected/609d4c09-e862-452e-b966-95f6d0b31959-kube-api-access-dw6v7\") pod \"collect-profiles-29320470-78z97\" (UID: \"609d4c09-e862-452e-b966-95f6d0b31959\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.342394 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/609d4c09-e862-452e-b966-95f6d0b31959-config-volume\") pod \"collect-profiles-29320470-78z97\" (UID: \"609d4c09-e862-452e-b966-95f6d0b31959\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.342445 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/609d4c09-e862-452e-b966-95f6d0b31959-secret-volume\") pod \"collect-profiles-29320470-78z97\" (UID: \"609d4c09-e862-452e-b966-95f6d0b31959\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.342468 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw6v7\" (UniqueName: \"kubernetes.io/projected/609d4c09-e862-452e-b966-95f6d0b31959-kube-api-access-dw6v7\") pod \"collect-profiles-29320470-78z97\" (UID: \"609d4c09-e862-452e-b966-95f6d0b31959\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.343970 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/609d4c09-e862-452e-b966-95f6d0b31959-config-volume\") pod 
\"collect-profiles-29320470-78z97\" (UID: \"609d4c09-e862-452e-b966-95f6d0b31959\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.349319 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/609d4c09-e862-452e-b966-95f6d0b31959-secret-volume\") pod \"collect-profiles-29320470-78z97\" (UID: \"609d4c09-e862-452e-b966-95f6d0b31959\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.359735 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw6v7\" (UniqueName: \"kubernetes.io/projected/609d4c09-e862-452e-b966-95f6d0b31959-kube-api-access-dw6v7\") pod \"collect-profiles-29320470-78z97\" (UID: \"609d4c09-e862-452e-b966-95f6d0b31959\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:00 crc kubenswrapper[4970]: I0930 10:30:00.471823 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:02 crc kubenswrapper[4970]: E0930 10:30:02.243563 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Sep 30 10:30:02 crc kubenswrapper[4970]: E0930 10:30:02.244196 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nv8mj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(b39c8562-3dd8-439a-b17d-967859c86ec2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:30:02 crc kubenswrapper[4970]: E0930 10:30:02.246181 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="b39c8562-3dd8-439a-b17d-967859c86ec2" Sep 30 10:30:02 crc kubenswrapper[4970]: I0930 10:30:02.435532 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97"] Sep 30 10:30:02 crc kubenswrapper[4970]: I0930 10:30:02.467198 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" event={"ID":"609d4c09-e862-452e-b966-95f6d0b31959","Type":"ContainerStarted","Data":"473aec4e601ab66f6436c5f59fcc7f01e0d6bfeed3bae4d376ba65dc39d6f378"} Sep 30 10:30:02 crc kubenswrapper[4970]: E0930 10:30:02.468241 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="b39c8562-3dd8-439a-b17d-967859c86ec2" Sep 30 10:30:03 crc kubenswrapper[4970]: I0930 10:30:03.478296 4970 generic.go:334] "Generic (PLEG): container finished" podID="609d4c09-e862-452e-b966-95f6d0b31959" containerID="2ea60799d9b6f3a8a17a9a18203cf0f0568d04db1ede27602832b0e07419aa83" exitCode=0 Sep 30 10:30:03 crc kubenswrapper[4970]: I0930 10:30:03.478464 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" event={"ID":"609d4c09-e862-452e-b966-95f6d0b31959","Type":"ContainerDied","Data":"2ea60799d9b6f3a8a17a9a18203cf0f0568d04db1ede27602832b0e07419aa83"} Sep 30 10:30:04 crc kubenswrapper[4970]: I0930 10:30:04.774781 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:04 crc kubenswrapper[4970]: I0930 10:30:04.931008 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw6v7\" (UniqueName: \"kubernetes.io/projected/609d4c09-e862-452e-b966-95f6d0b31959-kube-api-access-dw6v7\") pod \"609d4c09-e862-452e-b966-95f6d0b31959\" (UID: \"609d4c09-e862-452e-b966-95f6d0b31959\") " Sep 30 10:30:04 crc kubenswrapper[4970]: I0930 10:30:04.931078 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/609d4c09-e862-452e-b966-95f6d0b31959-config-volume\") pod \"609d4c09-e862-452e-b966-95f6d0b31959\" (UID: \"609d4c09-e862-452e-b966-95f6d0b31959\") " Sep 30 10:30:04 crc kubenswrapper[4970]: I0930 10:30:04.931114 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/609d4c09-e862-452e-b966-95f6d0b31959-secret-volume\") pod \"609d4c09-e862-452e-b966-95f6d0b31959\" (UID: \"609d4c09-e862-452e-b966-95f6d0b31959\") " Sep 30 10:30:04 crc kubenswrapper[4970]: I0930 10:30:04.932212 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/609d4c09-e862-452e-b966-95f6d0b31959-config-volume" (OuterVolumeSpecName: "config-volume") pod "609d4c09-e862-452e-b966-95f6d0b31959" (UID: "609d4c09-e862-452e-b966-95f6d0b31959"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:30:04 crc kubenswrapper[4970]: I0930 10:30:04.938915 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/609d4c09-e862-452e-b966-95f6d0b31959-kube-api-access-dw6v7" (OuterVolumeSpecName: "kube-api-access-dw6v7") pod "609d4c09-e862-452e-b966-95f6d0b31959" (UID: "609d4c09-e862-452e-b966-95f6d0b31959"). InnerVolumeSpecName "kube-api-access-dw6v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:30:04 crc kubenswrapper[4970]: I0930 10:30:04.940141 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/609d4c09-e862-452e-b966-95f6d0b31959-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "609d4c09-e862-452e-b966-95f6d0b31959" (UID: "609d4c09-e862-452e-b966-95f6d0b31959"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:30:05 crc kubenswrapper[4970]: I0930 10:30:05.033183 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw6v7\" (UniqueName: \"kubernetes.io/projected/609d4c09-e862-452e-b966-95f6d0b31959-kube-api-access-dw6v7\") on node \"crc\" DevicePath \"\"" Sep 30 10:30:05 crc kubenswrapper[4970]: I0930 10:30:05.033478 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/609d4c09-e862-452e-b966-95f6d0b31959-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 10:30:05 crc kubenswrapper[4970]: I0930 10:30:05.033490 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/609d4c09-e862-452e-b966-95f6d0b31959-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 10:30:05 crc kubenswrapper[4970]: I0930 10:30:05.502493 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" event={"ID":"609d4c09-e862-452e-b966-95f6d0b31959","Type":"ContainerDied","Data":"473aec4e601ab66f6436c5f59fcc7f01e0d6bfeed3bae4d376ba65dc39d6f378"} Sep 30 10:30:05 crc kubenswrapper[4970]: I0930 10:30:05.502608 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="473aec4e601ab66f6436c5f59fcc7f01e0d6bfeed3bae4d376ba65dc39d6f378" Sep 30 10:30:05 crc kubenswrapper[4970]: I0930 10:30:05.502544 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320470-78z97" Sep 30 10:30:05 crc kubenswrapper[4970]: I0930 10:30:05.856890 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624"] Sep 30 10:30:05 crc kubenswrapper[4970]: I0930 10:30:05.866375 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320425-lj624"] Sep 30 10:30:06 crc kubenswrapper[4970]: I0930 10:30:06.669133 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:30:06 crc kubenswrapper[4970]: E0930 10:30:06.669832 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:30:07 crc kubenswrapper[4970]: I0930 10:30:07.693465 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7535af89-756e-4e84-b9f3-246296ca252e" path="/var/lib/kubelet/pods/7535af89-756e-4e84-b9f3-246296ca252e/volumes" Sep 30 10:30:18 crc kubenswrapper[4970]: I0930 10:30:18.634975 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b39c8562-3dd8-439a-b17d-967859c86ec2","Type":"ContainerStarted","Data":"32913e37741401ee67aa00072758e3166f6d5e3edb2f21026d6c227273950ffd"} Sep 30 10:30:18 crc kubenswrapper[4970]: I0930 10:30:18.664800 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.21145069 podStartE2EDuration="45.664773781s" podCreationTimestamp="2025-09-30 10:29:33 +0000 
UTC" firstStartedPulling="2025-09-30 10:29:34.899914112 +0000 UTC m=+2587.971765056" lastFinishedPulling="2025-09-30 10:30:17.353237203 +0000 UTC m=+2630.425088147" observedRunningTime="2025-09-30 10:30:18.660894114 +0000 UTC m=+2631.732745078" watchObservedRunningTime="2025-09-30 10:30:18.664773781 +0000 UTC m=+2631.736624755" Sep 30 10:30:21 crc kubenswrapper[4970]: I0930 10:30:21.669122 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:30:21 crc kubenswrapper[4970]: E0930 10:30:21.669914 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:30:32 crc kubenswrapper[4970]: I0930 10:30:32.668390 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:30:32 crc kubenswrapper[4970]: E0930 10:30:32.670175 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:30:45 crc kubenswrapper[4970]: I0930 10:30:45.669006 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:30:45 crc kubenswrapper[4970]: E0930 10:30:45.669682 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:30:58 crc kubenswrapper[4970]: I0930 10:30:58.669171 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:30:58 crc kubenswrapper[4970]: E0930 10:30:58.669973 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:31:02 crc kubenswrapper[4970]: I0930 10:31:02.175807 4970 scope.go:117] "RemoveContainer" containerID="698f40eeefc86b59c0ffe4a50850243c02f1109302e2cfb18448e76237bb76de" Sep 30 10:31:11 crc kubenswrapper[4970]: I0930 10:31:11.669122 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:31:11 crc kubenswrapper[4970]: E0930 10:31:11.669668 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:31:23 crc kubenswrapper[4970]: I0930 10:31:23.668578 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:31:23 crc kubenswrapper[4970]: E0930 10:31:23.669353 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.190244 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9dd65"] Sep 30 10:31:30 crc kubenswrapper[4970]: E0930 10:31:30.191257 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="609d4c09-e862-452e-b966-95f6d0b31959" containerName="collect-profiles" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.191274 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="609d4c09-e862-452e-b966-95f6d0b31959" containerName="collect-profiles" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.191535 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="609d4c09-e862-452e-b966-95f6d0b31959" containerName="collect-profiles" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.193205 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.199242 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9dd65"] Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.283680 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c40629-c88a-4781-8dc2-af8d7449ef3b-utilities\") pod \"community-operators-9dd65\" (UID: \"70c40629-c88a-4781-8dc2-af8d7449ef3b\") " pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.283892 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j247\" (UniqueName: \"kubernetes.io/projected/70c40629-c88a-4781-8dc2-af8d7449ef3b-kube-api-access-6j247\") pod \"community-operators-9dd65\" (UID: \"70c40629-c88a-4781-8dc2-af8d7449ef3b\") " pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.283917 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c40629-c88a-4781-8dc2-af8d7449ef3b-catalog-content\") pod \"community-operators-9dd65\" (UID: \"70c40629-c88a-4781-8dc2-af8d7449ef3b\") " pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.386897 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6j247\" (UniqueName: \"kubernetes.io/projected/70c40629-c88a-4781-8dc2-af8d7449ef3b-kube-api-access-6j247\") pod \"community-operators-9dd65\" (UID: \"70c40629-c88a-4781-8dc2-af8d7449ef3b\") " pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.387046 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c40629-c88a-4781-8dc2-af8d7449ef3b-catalog-content\") pod \"community-operators-9dd65\" (UID: \"70c40629-c88a-4781-8dc2-af8d7449ef3b\") " pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.387163 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c40629-c88a-4781-8dc2-af8d7449ef3b-utilities\") pod \"community-operators-9dd65\" (UID: \"70c40629-c88a-4781-8dc2-af8d7449ef3b\") " pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.387642 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c40629-c88a-4781-8dc2-af8d7449ef3b-catalog-content\") pod \"community-operators-9dd65\" (UID: \"70c40629-c88a-4781-8dc2-af8d7449ef3b\") " pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.387688 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c40629-c88a-4781-8dc2-af8d7449ef3b-utilities\") pod \"community-operators-9dd65\" (UID: \"70c40629-c88a-4781-8dc2-af8d7449ef3b\") " pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.417404 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6j247\" (UniqueName: \"kubernetes.io/projected/70c40629-c88a-4781-8dc2-af8d7449ef3b-kube-api-access-6j247\") pod \"community-operators-9dd65\" (UID: \"70c40629-c88a-4781-8dc2-af8d7449ef3b\") " pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:30 crc kubenswrapper[4970]: I0930 10:31:30.531978 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:31 crc kubenswrapper[4970]: I0930 10:31:31.079827 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9dd65"] Sep 30 10:31:31 crc kubenswrapper[4970]: I0930 10:31:31.381660 4970 generic.go:334] "Generic (PLEG): container finished" podID="70c40629-c88a-4781-8dc2-af8d7449ef3b" containerID="647d962d2d8b63730a76051a5e31077c4d271dbd7ce0825626a4488f7b118aa6" exitCode=0 Sep 30 10:31:31 crc kubenswrapper[4970]: I0930 10:31:31.381710 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dd65" event={"ID":"70c40629-c88a-4781-8dc2-af8d7449ef3b","Type":"ContainerDied","Data":"647d962d2d8b63730a76051a5e31077c4d271dbd7ce0825626a4488f7b118aa6"} Sep 30 10:31:31 crc kubenswrapper[4970]: I0930 10:31:31.381739 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dd65" event={"ID":"70c40629-c88a-4781-8dc2-af8d7449ef3b","Type":"ContainerStarted","Data":"c64eae19303d462c9ac3592f2dda1cd12029bedfb0a9ff745fb35bd559bee705"} Sep 30 10:31:33 crc kubenswrapper[4970]: I0930 10:31:33.406197 4970 generic.go:334] "Generic (PLEG): container finished" podID="70c40629-c88a-4781-8dc2-af8d7449ef3b" containerID="7f154037b1645e4b3bdc19d2f7ddb34864ac3bd40ba64eebc8e084c34443a730" exitCode=0 Sep 30 10:31:33 crc kubenswrapper[4970]: I0930 10:31:33.406301 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dd65" event={"ID":"70c40629-c88a-4781-8dc2-af8d7449ef3b","Type":"ContainerDied","Data":"7f154037b1645e4b3bdc19d2f7ddb34864ac3bd40ba64eebc8e084c34443a730"} Sep 30 10:31:34 crc kubenswrapper[4970]: I0930 10:31:34.418248 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dd65" event={"ID":"70c40629-c88a-4781-8dc2-af8d7449ef3b","Type":"ContainerStarted","Data":"b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e"} Sep 30 10:31:34 crc kubenswrapper[4970]: I0930 10:31:34.443205 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9dd65" podStartSLOduration=1.93901233 podStartE2EDuration="4.443180264s" podCreationTimestamp="2025-09-30 10:31:30 +0000 UTC" firstStartedPulling="2025-09-30 10:31:31.383538216 +0000 UTC m=+2704.455389170" lastFinishedPulling="2025-09-30 10:31:33.88770616 +0000 UTC m=+2706.959557104" observedRunningTime="2025-09-30 10:31:34.431500783 +0000 UTC m=+2707.503351747" watchObservedRunningTime="2025-09-30 10:31:34.443180264 +0000 UTC m=+2707.515031228" Sep 30 10:31:35 crc kubenswrapper[4970]: I0930 10:31:35.668947 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:31:36 crc kubenswrapper[4970]: I0930 10:31:36.437407 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" 
event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"7b16ef9b95181d464ea47f22f44cc7958b9ed2628bb1d598673e1b8eb7c945ed"} Sep 30 10:31:40 crc kubenswrapper[4970]: I0930 10:31:40.533109 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:40 crc kubenswrapper[4970]: I0930 10:31:40.533785 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:40 crc kubenswrapper[4970]: I0930 10:31:40.625104 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:41 crc kubenswrapper[4970]: I0930 10:31:41.557303 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:41 crc kubenswrapper[4970]: I0930 10:31:41.617411 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9dd65"] Sep 30 10:31:43 crc kubenswrapper[4970]: I0930 10:31:43.508251 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9dd65" podUID="70c40629-c88a-4781-8dc2-af8d7449ef3b" containerName="registry-server" containerID="cri-o://b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e" gracePeriod=2 Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.050758 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.185659 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c40629-c88a-4781-8dc2-af8d7449ef3b-utilities\") pod \"70c40629-c88a-4781-8dc2-af8d7449ef3b\" (UID: \"70c40629-c88a-4781-8dc2-af8d7449ef3b\") " Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.186365 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6j247\" (UniqueName: \"kubernetes.io/projected/70c40629-c88a-4781-8dc2-af8d7449ef3b-kube-api-access-6j247\") pod \"70c40629-c88a-4781-8dc2-af8d7449ef3b\" (UID: \"70c40629-c88a-4781-8dc2-af8d7449ef3b\") " Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.186425 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c40629-c88a-4781-8dc2-af8d7449ef3b-catalog-content\") pod \"70c40629-c88a-4781-8dc2-af8d7449ef3b\" (UID: \"70c40629-c88a-4781-8dc2-af8d7449ef3b\") " Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.186544 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c40629-c88a-4781-8dc2-af8d7449ef3b-utilities" (OuterVolumeSpecName: "utilities") pod "70c40629-c88a-4781-8dc2-af8d7449ef3b" (UID: "70c40629-c88a-4781-8dc2-af8d7449ef3b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.186917 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c40629-c88a-4781-8dc2-af8d7449ef3b-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.202157 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c40629-c88a-4781-8dc2-af8d7449ef3b-kube-api-access-6j247" (OuterVolumeSpecName: "kube-api-access-6j247") pod "70c40629-c88a-4781-8dc2-af8d7449ef3b" (UID: "70c40629-c88a-4781-8dc2-af8d7449ef3b"). InnerVolumeSpecName "kube-api-access-6j247". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.233346 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c40629-c88a-4781-8dc2-af8d7449ef3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70c40629-c88a-4781-8dc2-af8d7449ef3b" (UID: "70c40629-c88a-4781-8dc2-af8d7449ef3b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.288362 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6j247\" (UniqueName: \"kubernetes.io/projected/70c40629-c88a-4781-8dc2-af8d7449ef3b-kube-api-access-6j247\") on node \"crc\" DevicePath \"\"" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.288389 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c40629-c88a-4781-8dc2-af8d7449ef3b-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.520491 4970 generic.go:334] "Generic (PLEG): container finished" podID="70c40629-c88a-4781-8dc2-af8d7449ef3b" containerID="b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e" exitCode=0 Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.520542 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dd65" event={"ID":"70c40629-c88a-4781-8dc2-af8d7449ef3b","Type":"ContainerDied","Data":"b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e"} Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.520574 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9dd65" event={"ID":"70c40629-c88a-4781-8dc2-af8d7449ef3b","Type":"ContainerDied","Data":"c64eae19303d462c9ac3592f2dda1cd12029bedfb0a9ff745fb35bd559bee705"} Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.520594 4970 scope.go:117] "RemoveContainer" containerID="b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.520613 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9dd65" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.537897 4970 scope.go:117] "RemoveContainer" containerID="7f154037b1645e4b3bdc19d2f7ddb34864ac3bd40ba64eebc8e084c34443a730" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.558266 4970 scope.go:117] "RemoveContainer" containerID="647d962d2d8b63730a76051a5e31077c4d271dbd7ce0825626a4488f7b118aa6" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.625539 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9dd65"] Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.632068 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9dd65"] Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.672281 4970 scope.go:117] "RemoveContainer" containerID="b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e" Sep 30 10:31:44 crc kubenswrapper[4970]: E0930 10:31:44.675245 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e\": container with ID starting with b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e not found: ID does not exist" containerID="b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.675296 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e"} err="failed to get container status \"b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e\": rpc error: code = NotFound desc = could not find container \"b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e\": container with ID starting with b848243446220d3b9bcde2d8942aced10ef0b2842f1811f24e1ae2357ceaba9e not found: ID does not exist" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.675324 4970 scope.go:117] "RemoveContainer" containerID="7f154037b1645e4b3bdc19d2f7ddb34864ac3bd40ba64eebc8e084c34443a730" Sep 30 10:31:44 crc kubenswrapper[4970]: E0930 10:31:44.675746 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f154037b1645e4b3bdc19d2f7ddb34864ac3bd40ba64eebc8e084c34443a730\": container with ID starting with 7f154037b1645e4b3bdc19d2f7ddb34864ac3bd40ba64eebc8e084c34443a730 not found: ID does not exist" containerID="7f154037b1645e4b3bdc19d2f7ddb34864ac3bd40ba64eebc8e084c34443a730" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.675778 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f154037b1645e4b3bdc19d2f7ddb34864ac3bd40ba64eebc8e084c34443a730"} err="failed to get container status \"7f154037b1645e4b3bdc19d2f7ddb34864ac3bd40ba64eebc8e084c34443a730\": rpc error: code = NotFound desc = could not find container \"7f154037b1645e4b3bdc19d2f7ddb34864ac3bd40ba64eebc8e084c34443a730\": container with ID starting with 7f154037b1645e4b3bdc19d2f7ddb34864ac3bd40ba64eebc8e084c34443a730 not found: ID does not exist" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.675794 4970 scope.go:117] "RemoveContainer" containerID="647d962d2d8b63730a76051a5e31077c4d271dbd7ce0825626a4488f7b118aa6" Sep 30 10:31:44 crc kubenswrapper[4970]: E0930 10:31:44.676205 4970 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"647d962d2d8b63730a76051a5e31077c4d271dbd7ce0825626a4488f7b118aa6\": container with ID starting with 647d962d2d8b63730a76051a5e31077c4d271dbd7ce0825626a4488f7b118aa6 not found: ID does not exist" containerID="647d962d2d8b63730a76051a5e31077c4d271dbd7ce0825626a4488f7b118aa6" Sep 30 10:31:44 crc kubenswrapper[4970]: I0930 10:31:44.676230 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647d962d2d8b63730a76051a5e31077c4d271dbd7ce0825626a4488f7b118aa6"} err="failed to get container status \"647d962d2d8b63730a76051a5e31077c4d271dbd7ce0825626a4488f7b118aa6\": rpc error: code = NotFound desc = could not find container \"647d962d2d8b63730a76051a5e31077c4d271dbd7ce0825626a4488f7b118aa6\": container with ID starting with 647d962d2d8b63730a76051a5e31077c4d271dbd7ce0825626a4488f7b118aa6 not found: ID does not exist" Sep 30 10:31:45 crc kubenswrapper[4970]: I0930 10:31:45.689490 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c40629-c88a-4781-8dc2-af8d7449ef3b" path="/var/lib/kubelet/pods/70c40629-c88a-4781-8dc2-af8d7449ef3b/volumes" Sep 30 10:32:20 crc kubenswrapper[4970]: I0930 10:32:20.862661 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5b2rn"] Sep 30 10:32:20 crc kubenswrapper[4970]: E0930 10:32:20.864425 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c40629-c88a-4781-8dc2-af8d7449ef3b" containerName="extract-utilities" Sep 30 10:32:20 crc kubenswrapper[4970]: I0930 10:32:20.865196 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c40629-c88a-4781-8dc2-af8d7449ef3b" containerName="extract-utilities" Sep 30 10:32:20 crc kubenswrapper[4970]: E0930 10:32:20.865232 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c40629-c88a-4781-8dc2-af8d7449ef3b" containerName="extract-content" Sep 30 10:32:20 crc kubenswrapper[4970]: I0930 10:32:20.865250 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c40629-c88a-4781-8dc2-af8d7449ef3b" containerName="extract-content" Sep 30 10:32:20 crc kubenswrapper[4970]: E0930 10:32:20.865275 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c40629-c88a-4781-8dc2-af8d7449ef3b" containerName="registry-server" Sep 30 10:32:20 crc kubenswrapper[4970]: I0930 10:32:20.865292 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c40629-c88a-4781-8dc2-af8d7449ef3b" containerName="registry-server" Sep 30 10:32:20 crc kubenswrapper[4970]: I0930 10:32:20.865681 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c40629-c88a-4781-8dc2-af8d7449ef3b" containerName="registry-server" Sep 30 10:32:20 crc kubenswrapper[4970]: I0930 10:32:20.868695 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:20 crc kubenswrapper[4970]: I0930 10:32:20.889698 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5b2rn"] Sep 30 10:32:20 crc kubenswrapper[4970]: I0930 10:32:20.989424 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v88nn\" (UniqueName: \"kubernetes.io/projected/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-kube-api-access-v88nn\") pod \"redhat-operators-5b2rn\" (UID: \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\") " pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:20 crc kubenswrapper[4970]: I0930 10:32:20.989679 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-utilities\") pod \"redhat-operators-5b2rn\" (UID: \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\") " pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:20 crc kubenswrapper[4970]: I0930 10:32:20.989893 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-catalog-content\") pod \"redhat-operators-5b2rn\" (UID: \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\") " pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:21 crc kubenswrapper[4970]: I0930 10:32:21.092116 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-catalog-content\") pod \"redhat-operators-5b2rn\" (UID: \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\") " pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:21 crc kubenswrapper[4970]: I0930 10:32:21.092242 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v88nn\" (UniqueName: \"kubernetes.io/projected/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-kube-api-access-v88nn\") pod \"redhat-operators-5b2rn\" (UID: \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\") " pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:21 crc kubenswrapper[4970]: I0930 10:32:21.092299 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-utilities\") pod \"redhat-operators-5b2rn\" (UID: \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\") " pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:21 crc kubenswrapper[4970]: I0930 10:32:21.092833 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-utilities\") pod \"redhat-operators-5b2rn\" (UID: \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\") " pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:21 crc kubenswrapper[4970]: I0930 10:32:21.092848 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-catalog-content\") pod \"redhat-operators-5b2rn\" (UID: \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\") " pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:21 crc kubenswrapper[4970]: I0930 10:32:21.115867 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-v88nn\" (UniqueName: \"kubernetes.io/projected/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-kube-api-access-v88nn\") pod \"redhat-operators-5b2rn\" (UID: \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\") " pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:21 crc kubenswrapper[4970]: I0930 10:32:21.216609 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:21 crc kubenswrapper[4970]: I0930 10:32:21.746704 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5b2rn"] Sep 30 10:32:21 crc kubenswrapper[4970]: I0930 10:32:21.923947 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2rn" event={"ID":"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa","Type":"ContainerStarted","Data":"aa8550ddf52731f93f96b48c93839f908fb48990d5adf0de52db2430f7927615"} Sep 30 10:32:22 crc kubenswrapper[4970]: I0930 10:32:22.933975 4970 generic.go:334] "Generic (PLEG): container finished" podID="0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" containerID="f71b53f5e566e1761456ac9a1ac0eb68204bf55f1c6f312f5f844eb8fe9169b5" exitCode=0 Sep 30 10:32:22 crc kubenswrapper[4970]: I0930 10:32:22.934070 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2rn" event={"ID":"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa","Type":"ContainerDied","Data":"f71b53f5e566e1761456ac9a1ac0eb68204bf55f1c6f312f5f844eb8fe9169b5"} Sep 30 10:32:24 crc kubenswrapper[4970]: I0930 10:32:24.955933 4970 generic.go:334] "Generic (PLEG): container finished" podID="0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" containerID="e62aa0d1328286c150c1580a3e1b79d4620d4da4505923b1314ca44596802647" exitCode=0 Sep 30 10:32:24 crc kubenswrapper[4970]: I0930 10:32:24.956291 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2rn" event={"ID":"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa","Type":"ContainerDied","Data":"e62aa0d1328286c150c1580a3e1b79d4620d4da4505923b1314ca44596802647"} Sep 30 10:32:25 crc kubenswrapper[4970]: I0930 10:32:25.973434 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2rn" event={"ID":"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa","Type":"ContainerStarted","Data":"78b05eed728c7b46ac1d389dbaec87d8c1e9e4d00ad2696827cdb2ad4d5161e4"} Sep 30 10:32:25 crc kubenswrapper[4970]: I0930 10:32:25.996221 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5b2rn" podStartSLOduration=3.52303071 podStartE2EDuration="5.996195221s" podCreationTimestamp="2025-09-30 10:32:20 +0000 UTC" firstStartedPulling="2025-09-30 10:32:22.936304976 +0000 UTC m=+2756.008155920" lastFinishedPulling="2025-09-30 10:32:25.409469477 +0000 UTC m=+2758.481320431" observedRunningTime="2025-09-30 10:32:25.993906639 +0000 UTC m=+2759.065757583" watchObservedRunningTime="2025-09-30 10:32:25.996195221 +0000 UTC m=+2759.068046155" Sep 30 10:32:31 crc kubenswrapper[4970]: I0930 10:32:31.217367 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:31 crc kubenswrapper[4970]: I0930 10:32:31.218079 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:31 crc kubenswrapper[4970]: I0930 10:32:31.294516 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:32 crc kubenswrapper[4970]: I0930 10:32:32.124114 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:35 crc kubenswrapper[4970]: I0930 10:32:35.450370 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5b2rn"] Sep 30 10:32:35 crc kubenswrapper[4970]: I0930 10:32:35.451230 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5b2rn" podUID="0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" containerName="registry-server" containerID="cri-o://78b05eed728c7b46ac1d389dbaec87d8c1e9e4d00ad2696827cdb2ad4d5161e4" gracePeriod=2 Sep 30 10:32:36 crc kubenswrapper[4970]: I0930 10:32:36.097638 4970 generic.go:334] "Generic (PLEG): container finished" podID="0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" containerID="78b05eed728c7b46ac1d389dbaec87d8c1e9e4d00ad2696827cdb2ad4d5161e4" exitCode=0 Sep 30 10:32:36 crc kubenswrapper[4970]: I0930 10:32:36.097695 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2rn" event={"ID":"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa","Type":"ContainerDied","Data":"78b05eed728c7b46ac1d389dbaec87d8c1e9e4d00ad2696827cdb2ad4d5161e4"} Sep 30 10:32:36 crc kubenswrapper[4970]: I0930 10:32:36.462949 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:36 crc kubenswrapper[4970]: I0930 10:32:36.577620 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v88nn\" (UniqueName: \"kubernetes.io/projected/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-kube-api-access-v88nn\") pod \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\" (UID: \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\") " Sep 30 10:32:36 crc kubenswrapper[4970]: I0930 10:32:36.577694 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-utilities\") pod \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\" (UID: \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\") " Sep 30 10:32:36 crc kubenswrapper[4970]: I0930 10:32:36.577826 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-catalog-content\") pod \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\" (UID: \"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa\") " Sep 30 10:32:36 crc kubenswrapper[4970]: I0930 10:32:36.578644 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-utilities" (OuterVolumeSpecName: "utilities") pod "0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" (UID: "0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:32:36 crc kubenswrapper[4970]: I0930 10:32:36.589444 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-kube-api-access-v88nn" (OuterVolumeSpecName: "kube-api-access-v88nn") pod "0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" (UID: "0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa"). InnerVolumeSpecName "kube-api-access-v88nn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:32:36 crc kubenswrapper[4970]: I0930 10:32:36.670492 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" (UID: "0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:32:36 crc kubenswrapper[4970]: I0930 10:32:36.680117 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:32:36 crc kubenswrapper[4970]: I0930 10:32:36.680154 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v88nn\" (UniqueName: \"kubernetes.io/projected/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-kube-api-access-v88nn\") on node \"crc\" DevicePath \"\"" Sep 30 10:32:36 crc kubenswrapper[4970]: I0930 10:32:36.680173 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:32:37 crc kubenswrapper[4970]: I0930 10:32:37.114385 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5b2rn" event={"ID":"0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa","Type":"ContainerDied","Data":"aa8550ddf52731f93f96b48c93839f908fb48990d5adf0de52db2430f7927615"} Sep 30 10:32:37 crc kubenswrapper[4970]: I0930 10:32:37.114963 4970 scope.go:117] "RemoveContainer" containerID="78b05eed728c7b46ac1d389dbaec87d8c1e9e4d00ad2696827cdb2ad4d5161e4" Sep 30 10:32:37 crc kubenswrapper[4970]: I0930 10:32:37.114886 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5b2rn" Sep 30 10:32:37 crc kubenswrapper[4970]: I0930 10:32:37.162705 4970 scope.go:117] "RemoveContainer" containerID="e62aa0d1328286c150c1580a3e1b79d4620d4da4505923b1314ca44596802647" Sep 30 10:32:37 crc kubenswrapper[4970]: I0930 10:32:37.173958 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5b2rn"] Sep 30 10:32:37 crc kubenswrapper[4970]: I0930 10:32:37.181119 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5b2rn"] Sep 30 10:32:37 crc kubenswrapper[4970]: I0930 10:32:37.211767 4970 scope.go:117] "RemoveContainer" containerID="f71b53f5e566e1761456ac9a1ac0eb68204bf55f1c6f312f5f844eb8fe9169b5" Sep 30 10:32:37 crc kubenswrapper[4970]: I0930 10:32:37.697165 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" path="/var/lib/kubelet/pods/0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa/volumes" Sep 30 10:34:04 crc kubenswrapper[4970]: I0930 10:34:04.821794 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:34:04 crc kubenswrapper[4970]: I0930 10:34:04.822718 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:34:34 crc kubenswrapper[4970]: I0930 10:34:34.821703 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:34:34 crc kubenswrapper[4970]: I0930 10:34:34.823096 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:35:04 crc kubenswrapper[4970]: I0930 10:35:04.821147 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:35:04 crc kubenswrapper[4970]: I0930 10:35:04.822113 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:35:04 crc kubenswrapper[4970]: I0930 10:35:04.822160 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 10:35:04 crc kubenswrapper[4970]: I0930 10:35:04.822819 4970 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7b16ef9b95181d464ea47f22f44cc7958b9ed2628bb1d598673e1b8eb7c945ed"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 10:35:04 crc kubenswrapper[4970]: I0930 10:35:04.822866 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://7b16ef9b95181d464ea47f22f44cc7958b9ed2628bb1d598673e1b8eb7c945ed" gracePeriod=600 Sep 30 10:35:05 crc kubenswrapper[4970]: I0930 10:35:05.672115 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="7b16ef9b95181d464ea47f22f44cc7958b9ed2628bb1d598673e1b8eb7c945ed" exitCode=0 Sep 30 10:35:05 crc kubenswrapper[4970]: I0930 10:35:05.685525 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"7b16ef9b95181d464ea47f22f44cc7958b9ed2628bb1d598673e1b8eb7c945ed"} Sep 30 10:35:05 crc kubenswrapper[4970]: I0930 10:35:05.685572 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88"} Sep 30 10:35:05 crc kubenswrapper[4970]: I0930 10:35:05.685591 4970 scope.go:117] "RemoveContainer" containerID="83b2ab8adcc59d796537efc14a03953008865167f80ec0b48792373c6732afbf" Sep 30 10:37:34 crc kubenswrapper[4970]: I0930 10:37:34.822136 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:37:34 crc kubenswrapper[4970]: I0930 10:37:34.822834 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:38:04 crc kubenswrapper[4970]: I0930 10:38:04.821425 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:38:04 crc kubenswrapper[4970]: I0930 10:38:04.821838 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:38:34 crc kubenswrapper[4970]: I0930 10:38:34.821514 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:38:34 crc kubenswrapper[4970]: I0930 10:38:34.822300 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:38:34 crc kubenswrapper[4970]: I0930 10:38:34.822398 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 10:38:34 crc kubenswrapper[4970]: I0930 10:38:34.823677 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 10:38:34 crc kubenswrapper[4970]: I0930 10:38:34.823845 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" gracePeriod=600 Sep 30 10:38:34 crc kubenswrapper[4970]: E0930 10:38:34.957539 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:38:35 crc kubenswrapper[4970]: I0930 10:38:35.037617 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" exitCode=0 Sep 30 10:38:35 crc kubenswrapper[4970]: I0930 10:38:35.037690 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88"} Sep 30 10:38:35 crc kubenswrapper[4970]: I0930 10:38:35.037777 4970 scope.go:117] "RemoveContainer" containerID="7b16ef9b95181d464ea47f22f44cc7958b9ed2628bb1d598673e1b8eb7c945ed" Sep 30 10:38:35 crc kubenswrapper[4970]: I0930 10:38:35.038828 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:38:35 crc kubenswrapper[4970]: E0930 10:38:35.039396 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" 
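The entries above trace the kubelet's full liveness-failure path for machine-config-daemon: patch_prober records the refused HTTP GET against 127.0.0.1:8798, the sync loop marks the container unhealthy, kuberuntime kills it with its termination grace period, and once restarts accumulate, pod_workers suppresses further restarts with CrashLoopBackOff. The "back-off 5m0s" text matches the kubelet's default restart backoff, which starts at 10s, doubles per restart, and caps at five minutes. A minimal sketch of that schedule, assuming those default parameters:

```python
# Minimal sketch of the kubelet's CrashLoopBackOff delay schedule, assuming
# the default parameters (10s initial delay, doubled per restart, 5m cap).
# The 300s cap is what the "back-off 5m0s" message above reflects.
def crashloop_delays(restarts: int, initial: float = 10.0, cap: float = 300.0):
    delay = initial
    for _ in range(restarts):
        yield min(delay, cap)
        delay *= 2

print(list(crashloop_delays(8)))  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
```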
Sep 30 10:38:48 crc kubenswrapper[4970]: I0930 10:38:48.668301 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88"
Sep 30 10:38:48 crc kubenswrapper[4970]: E0930 10:38:48.669089 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:39:00 crc kubenswrapper[4970]: I0930 10:39:00.669249 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88"
Sep 30 10:39:00 crc kubenswrapper[4970]: E0930 10:39:00.670185 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.392678 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vcrfm"]
Sep 30 10:39:07 crc kubenswrapper[4970]: E0930 10:39:07.394114 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" containerName="extract-utilities"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.394138 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" containerName="extract-utilities"
Sep 30 10:39:07 crc kubenswrapper[4970]: E0930 10:39:07.394195 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" containerName="registry-server"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.394206 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" containerName="registry-server"
Sep 30 10:39:07 crc kubenswrapper[4970]: E0930 10:39:07.394242 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" containerName="extract-content"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.394252 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" containerName="extract-content"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.394605 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a380f99-f3e0-4b67-a9b7-4ce6abcbe7fa" containerName="registry-server"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.397817 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.403337 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcrfm"]
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.494434 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-utilities\") pod \"certified-operators-vcrfm\" (UID: \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\") " pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.494690 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-catalog-content\") pod \"certified-operators-vcrfm\" (UID: \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\") " pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.494761 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4zxz\" (UniqueName: \"kubernetes.io/projected/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-kube-api-access-l4zxz\") pod \"certified-operators-vcrfm\" (UID: \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\") " pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.596536 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-utilities\") pod \"certified-operators-vcrfm\" (UID: \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\") " pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.596654 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-catalog-content\") pod \"certified-operators-vcrfm\" (UID: \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\") " pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.596700 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4zxz\" (UniqueName: \"kubernetes.io/projected/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-kube-api-access-l4zxz\") pod \"certified-operators-vcrfm\" (UID: \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\") " pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.597153 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-utilities\") pod \"certified-operators-vcrfm\" (UID: \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\") " pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.597190 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-catalog-content\") pod \"certified-operators-vcrfm\" (UID: \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\") " pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.615218 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4zxz\" (UniqueName: \"kubernetes.io/projected/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-kube-api-access-l4zxz\") pod \"certified-operators-vcrfm\" (UID: \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\") " pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:07 crc kubenswrapper[4970]: I0930 10:39:07.717474 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:08 crc kubenswrapper[4970]: I0930 10:39:08.296277 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vcrfm"]
Sep 30 10:39:08 crc kubenswrapper[4970]: W0930 10:39:08.297873 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3451b32e_a0ea_4885_bdb1_b7dfdaf1f726.slice/crio-f2f3bb9a035f6d33068260dfda1f94256193ce9fee11b71505482ef69b21bdd8 WatchSource:0}: Error finding container f2f3bb9a035f6d33068260dfda1f94256193ce9fee11b71505482ef69b21bdd8: Status 404 returned error can't find the container with id f2f3bb9a035f6d33068260dfda1f94256193ce9fee11b71505482ef69b21bdd8
Sep 30 10:39:08 crc kubenswrapper[4970]: I0930 10:39:08.357721 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcrfm" event={"ID":"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726","Type":"ContainerStarted","Data":"f2f3bb9a035f6d33068260dfda1f94256193ce9fee11b71505482ef69b21bdd8"}
Sep 30 10:39:09 crc kubenswrapper[4970]: I0930 10:39:09.368825 4970 generic.go:334] "Generic (PLEG): container finished" podID="3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" containerID="5a55d7abfd812467d6a4bc3f24fc15b40df21ea0f368d476e94474fe3e700255" exitCode=0
Sep 30 10:39:09 crc kubenswrapper[4970]: I0930 10:39:09.368889 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcrfm" event={"ID":"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726","Type":"ContainerDied","Data":"5a55d7abfd812467d6a4bc3f24fc15b40df21ea0f368d476e94474fe3e700255"}
Sep 30 10:39:09 crc kubenswrapper[4970]: I0930 10:39:09.371116 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Sep 30 10:39:10 crc kubenswrapper[4970]: I0930 10:39:10.382670 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcrfm" event={"ID":"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726","Type":"ContainerStarted","Data":"257580aa3b280b613bc0639f64e5dc4c1ddd09271d90beb0eaf0579a7b26b7ca"}
Sep 30 10:39:11 crc kubenswrapper[4970]: I0930 10:39:11.395799 4970 generic.go:334] "Generic (PLEG): container finished" podID="3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" containerID="257580aa3b280b613bc0639f64e5dc4c1ddd09271d90beb0eaf0579a7b26b7ca" exitCode=0
Sep 30 10:39:11 crc kubenswrapper[4970]: I0930 10:39:11.395871 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcrfm" event={"ID":"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726","Type":"ContainerDied","Data":"257580aa3b280b613bc0639f64e5dc4c1ddd09271d90beb0eaf0579a7b26b7ca"}
Sep 30 10:39:12 crc kubenswrapper[4970]: I0930 10:39:12.407885 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcrfm" event={"ID":"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726","Type":"ContainerStarted","Data":"b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795"}
Sep 30 10:39:12 crc kubenswrapper[4970]: I0930 10:39:12.434507 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vcrfm" podStartSLOduration=2.899015468 podStartE2EDuration="5.434486093s" podCreationTimestamp="2025-09-30 10:39:07 +0000 UTC" firstStartedPulling="2025-09-30 10:39:09.370792169 +0000 UTC m=+3162.442643103" lastFinishedPulling="2025-09-30 10:39:11.906262794 +0000 UTC m=+3164.978113728" observedRunningTime="2025-09-30 10:39:12.426597146 +0000 UTC m=+3165.498448110" watchObservedRunningTime="2025-09-30 10:39:12.434486093 +0000 UTC m=+3165.506337017"
Sep 30 10:39:12 crc kubenswrapper[4970]: I0930 10:39:12.668671 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88"
Sep 30 10:39:12 crc kubenswrapper[4970]: E0930 10:39:12.668974 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:39:17 crc kubenswrapper[4970]: I0930 10:39:17.718444 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:17 crc kubenswrapper[4970]: I0930 10:39:17.719371 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:17 crc kubenswrapper[4970]: I0930 10:39:17.808713 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:18 crc kubenswrapper[4970]: I0930 10:39:18.549696 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:18 crc kubenswrapper[4970]: I0930 10:39:18.613891 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcrfm"]
Sep 30 10:39:20 crc kubenswrapper[4970]: I0930 10:39:20.505175 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vcrfm" podUID="3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" containerName="registry-server" containerID="cri-o://b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795" gracePeriod=2
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.010049 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcrfm"
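The pod_startup_latency_tracker entry above for certified-operators-vcrfm separates total startup time from the SLO-relevant portion: podStartE2EDuration is 5.43s from pod creation to observed running, while podStartSLOduration is 2.90s, and the ~2.54s difference matches the firstStartedPulling to lastFinishedPulling window, i.e. the SLO figure excludes image-pull time. A hedged sketch for pulling those fields out of such an entry; the key=value regex is an assumption about the layout, not an official format guarantee:

```python
import re

# Sketch: extract startup-latency fields from a pod_startup_latency_tracker
# entry. Field names are verbatim from the log above; the regex is a
# best-effort assumption about the key=value layout.
entry = ('... podStartSLOduration=2.899015468 podStartE2EDuration="5.434486093s" '
         'podCreationTimestamp="2025-09-30 10:39:07 +0000 UTC" ...')
fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', entry))
slo = float(fields["podStartSLOduration"])                       # excludes image pulls
e2e = float(fields["podStartE2EDuration"].strip('"').rstrip('s'))
print(f"image pulls took ~{e2e - slo:.2f}s of {e2e:.2f}s total")  # ~2.54s of 5.43s
```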
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.178313 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-utilities\") pod \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\" (UID: \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\") "
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.178452 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-catalog-content\") pod \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\" (UID: \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\") "
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.178502 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4zxz\" (UniqueName: \"kubernetes.io/projected/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-kube-api-access-l4zxz\") pod \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\" (UID: \"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726\") "
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.180411 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-utilities" (OuterVolumeSpecName: "utilities") pod "3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" (UID: "3451b32e-a0ea-4885-bdb1-b7dfdaf1f726"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.186863 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-kube-api-access-l4zxz" (OuterVolumeSpecName: "kube-api-access-l4zxz") pod "3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" (UID: "3451b32e-a0ea-4885-bdb1-b7dfdaf1f726"). InnerVolumeSpecName "kube-api-access-l4zxz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.255825 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" (UID: "3451b32e-a0ea-4885-bdb1-b7dfdaf1f726"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.280843 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.280876 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4zxz\" (UniqueName: \"kubernetes.io/projected/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-kube-api-access-l4zxz\") on node \"crc\" DevicePath \"\""
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.280891 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.518641 4970 generic.go:334] "Generic (PLEG): container finished" podID="3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" containerID="b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795" exitCode=0
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.518761 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vcrfm"
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.518774 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcrfm" event={"ID":"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726","Type":"ContainerDied","Data":"b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795"}
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.520107 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vcrfm" event={"ID":"3451b32e-a0ea-4885-bdb1-b7dfdaf1f726","Type":"ContainerDied","Data":"f2f3bb9a035f6d33068260dfda1f94256193ce9fee11b71505482ef69b21bdd8"}
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.520156 4970 scope.go:117] "RemoveContainer" containerID="b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795"
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.560464 4970 scope.go:117] "RemoveContainer" containerID="257580aa3b280b613bc0639f64e5dc4c1ddd09271d90beb0eaf0579a7b26b7ca"
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.567324 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vcrfm"]
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.577713 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vcrfm"]
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.606769 4970 scope.go:117] "RemoveContainer" containerID="5a55d7abfd812467d6a4bc3f24fc15b40df21ea0f368d476e94474fe3e700255"
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.638206 4970 scope.go:117] "RemoveContainer" containerID="b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795"
Sep 30 10:39:21 crc kubenswrapper[4970]: E0930 10:39:21.638687 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795\": container with ID starting with b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795 not found: ID does not exist" containerID="b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795"
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.638722 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795"} err="failed to get container status \"b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795\": rpc error: code = NotFound desc = could not find container \"b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795\": container with ID starting with b8918e1df0d5672e63a28d1dbe771a9e284f506c9804a681deefc99b2f100795 not found: ID does not exist"
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.638746 4970 scope.go:117] "RemoveContainer" containerID="257580aa3b280b613bc0639f64e5dc4c1ddd09271d90beb0eaf0579a7b26b7ca"
Sep 30 10:39:21 crc kubenswrapper[4970]: E0930 10:39:21.640681 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"257580aa3b280b613bc0639f64e5dc4c1ddd09271d90beb0eaf0579a7b26b7ca\": container with ID starting with 257580aa3b280b613bc0639f64e5dc4c1ddd09271d90beb0eaf0579a7b26b7ca not found: ID does not exist" containerID="257580aa3b280b613bc0639f64e5dc4c1ddd09271d90beb0eaf0579a7b26b7ca"
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.640710 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"257580aa3b280b613bc0639f64e5dc4c1ddd09271d90beb0eaf0579a7b26b7ca"} err="failed to get container status \"257580aa3b280b613bc0639f64e5dc4c1ddd09271d90beb0eaf0579a7b26b7ca\": rpc error: code = NotFound desc = could not find container \"257580aa3b280b613bc0639f64e5dc4c1ddd09271d90beb0eaf0579a7b26b7ca\": container with ID starting with 257580aa3b280b613bc0639f64e5dc4c1ddd09271d90beb0eaf0579a7b26b7ca not found: ID does not exist"
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.640733 4970 scope.go:117] "RemoveContainer" containerID="5a55d7abfd812467d6a4bc3f24fc15b40df21ea0f368d476e94474fe3e700255"
Sep 30 10:39:21 crc kubenswrapper[4970]: E0930 10:39:21.641055 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a55d7abfd812467d6a4bc3f24fc15b40df21ea0f368d476e94474fe3e700255\": container with ID starting with 5a55d7abfd812467d6a4bc3f24fc15b40df21ea0f368d476e94474fe3e700255 not found: ID does not exist" containerID="5a55d7abfd812467d6a4bc3f24fc15b40df21ea0f368d476e94474fe3e700255"
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.641082 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a55d7abfd812467d6a4bc3f24fc15b40df21ea0f368d476e94474fe3e700255"} err="failed to get container status \"5a55d7abfd812467d6a4bc3f24fc15b40df21ea0f368d476e94474fe3e700255\": rpc error: code = NotFound desc = could not find container \"5a55d7abfd812467d6a4bc3f24fc15b40df21ea0f368d476e94474fe3e700255\": container with ID starting with 5a55d7abfd812467d6a4bc3f24fc15b40df21ea0f368d476e94474fe3e700255 not found: ID does not exist"
Sep 30 10:39:21 crc kubenswrapper[4970]: I0930 10:39:21.679145 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" path="/var/lib/kubelet/pods/3451b32e-a0ea-4885-bdb1-b7dfdaf1f726/volumes"
Sep 30 10:39:24 crc kubenswrapper[4970]: I0930 10:39:24.669349 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88"
Sep 30 10:39:24 crc kubenswrapper[4970]: E0930 10:39:24.671453 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:39:35 crc kubenswrapper[4970]: I0930 10:39:35.669388 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88"
Sep 30 10:39:35 crc kubenswrapper[4970]: E0930 10:39:35.671695 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.596737 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2bg8c"]
Sep 30 10:39:36 crc kubenswrapper[4970]: E0930 10:39:36.598083 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" containerName="extract-content"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.598342 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" containerName="extract-content"
Sep 30 10:39:36 crc kubenswrapper[4970]: E0930 10:39:36.598633 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" containerName="registry-server"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.598828 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" containerName="registry-server"
Sep 30 10:39:36 crc kubenswrapper[4970]: E0930 10:39:36.599070 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" containerName="extract-utilities"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.599315 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" containerName="extract-utilities"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.600635 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="3451b32e-a0ea-4885-bdb1-b7dfdaf1f726" containerName="registry-server"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.604306 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bg8c"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.631720 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bg8c"]
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.711922 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7af408-8f57-4f60-b083-2de4bb4a9707-utilities\") pod \"redhat-marketplace-2bg8c\" (UID: \"7d7af408-8f57-4f60-b083-2de4bb4a9707\") " pod="openshift-marketplace/redhat-marketplace-2bg8c"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.712412 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz5jj\" (UniqueName: \"kubernetes.io/projected/7d7af408-8f57-4f60-b083-2de4bb4a9707-kube-api-access-vz5jj\") pod \"redhat-marketplace-2bg8c\" (UID: \"7d7af408-8f57-4f60-b083-2de4bb4a9707\") " pod="openshift-marketplace/redhat-marketplace-2bg8c"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.712480 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7af408-8f57-4f60-b083-2de4bb4a9707-catalog-content\") pod \"redhat-marketplace-2bg8c\" (UID: \"7d7af408-8f57-4f60-b083-2de4bb4a9707\") " pod="openshift-marketplace/redhat-marketplace-2bg8c"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.814496 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7af408-8f57-4f60-b083-2de4bb4a9707-utilities\") pod \"redhat-marketplace-2bg8c\" (UID: \"7d7af408-8f57-4f60-b083-2de4bb4a9707\") " pod="openshift-marketplace/redhat-marketplace-2bg8c"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.814615 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz5jj\" (UniqueName: \"kubernetes.io/projected/7d7af408-8f57-4f60-b083-2de4bb4a9707-kube-api-access-vz5jj\") pod \"redhat-marketplace-2bg8c\" (UID: \"7d7af408-8f57-4f60-b083-2de4bb4a9707\") " pod="openshift-marketplace/redhat-marketplace-2bg8c"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.814650 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7af408-8f57-4f60-b083-2de4bb4a9707-catalog-content\") pod \"redhat-marketplace-2bg8c\" (UID: \"7d7af408-8f57-4f60-b083-2de4bb4a9707\") " pod="openshift-marketplace/redhat-marketplace-2bg8c"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.815047 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7af408-8f57-4f60-b083-2de4bb4a9707-utilities\") pod \"redhat-marketplace-2bg8c\" (UID: \"7d7af408-8f57-4f60-b083-2de4bb4a9707\") " pod="openshift-marketplace/redhat-marketplace-2bg8c"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.815105 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7af408-8f57-4f60-b083-2de4bb4a9707-catalog-content\") pod \"redhat-marketplace-2bg8c\" (UID: \"7d7af408-8f57-4f60-b083-2de4bb4a9707\") " pod="openshift-marketplace/redhat-marketplace-2bg8c"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.835765 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz5jj\" (UniqueName: \"kubernetes.io/projected/7d7af408-8f57-4f60-b083-2de4bb4a9707-kube-api-access-vz5jj\") pod \"redhat-marketplace-2bg8c\" (UID: \"7d7af408-8f57-4f60-b083-2de4bb4a9707\") " pod="openshift-marketplace/redhat-marketplace-2bg8c"
Sep 30 10:39:36 crc kubenswrapper[4970]: I0930 10:39:36.933022 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bg8c"
Sep 30 10:39:37 crc kubenswrapper[4970]: I0930 10:39:37.378746 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bg8c"]
Sep 30 10:39:37 crc kubenswrapper[4970]: I0930 10:39:37.687104 4970 generic.go:334] "Generic (PLEG): container finished" podID="7d7af408-8f57-4f60-b083-2de4bb4a9707" containerID="9560b4bdd38bf180ef9abfe3e44383c942b8d7e9dae634e7e65c1afcf54f064b" exitCode=0
Sep 30 10:39:37 crc kubenswrapper[4970]: I0930 10:39:37.687170 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bg8c" event={"ID":"7d7af408-8f57-4f60-b083-2de4bb4a9707","Type":"ContainerDied","Data":"9560b4bdd38bf180ef9abfe3e44383c942b8d7e9dae634e7e65c1afcf54f064b"}
Sep 30 10:39:37 crc kubenswrapper[4970]: I0930 10:39:37.687197 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bg8c" event={"ID":"7d7af408-8f57-4f60-b083-2de4bb4a9707","Type":"ContainerStarted","Data":"b2099f67da529faeb07aba6a5d889e81c0083cae16863c9e97d041c693c8067d"}
Sep 30 10:39:38 crc kubenswrapper[4970]: I0930 10:39:38.697009 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bg8c" event={"ID":"7d7af408-8f57-4f60-b083-2de4bb4a9707","Type":"ContainerStarted","Data":"f58a29bc07334df4ed8f4d3d69dc7d879ea94d3ac0d1a422ff17616d29d0ad9b"}
Sep 30 10:39:39 crc kubenswrapper[4970]: I0930 10:39:39.708939 4970 generic.go:334] "Generic (PLEG): container finished" podID="7d7af408-8f57-4f60-b083-2de4bb4a9707" containerID="f58a29bc07334df4ed8f4d3d69dc7d879ea94d3ac0d1a422ff17616d29d0ad9b" exitCode=0
Sep 30 10:39:39 crc kubenswrapper[4970]: I0930 10:39:39.709008 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bg8c" event={"ID":"7d7af408-8f57-4f60-b083-2de4bb4a9707","Type":"ContainerDied","Data":"f58a29bc07334df4ed8f4d3d69dc7d879ea94d3ac0d1a422ff17616d29d0ad9b"}
Sep 30 10:39:40 crc kubenswrapper[4970]: I0930 10:39:40.721924 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bg8c" event={"ID":"7d7af408-8f57-4f60-b083-2de4bb4a9707","Type":"ContainerStarted","Data":"dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629"}
Sep 30 10:39:40 crc kubenswrapper[4970]: I0930 10:39:40.745383 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2bg8c" podStartSLOduration=2.321517448 podStartE2EDuration="4.745365701s" podCreationTimestamp="2025-09-30 10:39:36 +0000 UTC" firstStartedPulling="2025-09-30 10:39:37.69013653 +0000 UTC m=+3190.761987504" lastFinishedPulling="2025-09-30 10:39:40.113984813 +0000 UTC m=+3193.185835757" observedRunningTime="2025-09-30 10:39:40.741768472 +0000 UTC m=+3193.813619436" watchObservedRunningTime="2025-09-30 10:39:40.745365701 +0000 UTC m=+3193.817216635"
Sep 30 10:39:46 crc kubenswrapper[4970]: I0930 10:39:46.933456 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2bg8c"
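Each marketplace catalog pod's volumes walk the same reconciler phases in these entries: VerifyControllerAttachedVolume, MountVolume started, and MountVolume.SetUp succeeded on the way up, then UnmountVolume started, UnmountVolume.TearDown succeeded, and Volume detached on the way down. A sketch that tracks the last-seen phase per volume while scanning a kubelet journal on stdin; the phase strings are copied from the entries here, and treating them as stable is an assumption:

```python
import re
import sys
from collections import defaultdict

# Phase strings copied from the reconciler/operation_generator entries above.
PHASES = [
    "operationExecutor.VerifyControllerAttachedVolume started",
    "operationExecutor.MountVolume started",
    "MountVolume.SetUp succeeded",
    "operationExecutor.UnmountVolume started",
    "UnmountVolume.TearDown succeeded",
    "Volume detached for volume",
]
# Volume paths appear both as UniqueName values with escaped quotes and as
# plain-quoted paths in TearDown entries; match either form.
VOLUME = re.compile(r'\\?"(kubernetes\.io/[^"\\]+)')

last_phase = defaultdict(str)
for line in sys.stdin:
    m = VOLUME.search(line)
    if not m:
        continue
    for phase in PHASES:
        if phase in line:
            last_phase[m.group(1)] = phase
for volume, phase in sorted(last_phase.items()):
    print(f"{phase:55} {volume}")
```

Volumes whose last-seen phase is a mount step belong to pods still running; volumes ending on "Volume detached for volume" have completed teardown, as all three of certified-operators-vcrfm's volumes do above.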
pod="openshift-marketplace/redhat-marketplace-2bg8c" Sep 30 10:39:46 crc kubenswrapper[4970]: I0930 10:39:46.935096 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2bg8c" Sep 30 10:39:47 crc kubenswrapper[4970]: I0930 10:39:47.009895 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2bg8c" Sep 30 10:39:47 crc kubenswrapper[4970]: I0930 10:39:47.828149 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2bg8c" Sep 30 10:39:47 crc kubenswrapper[4970]: I0930 10:39:47.874642 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bg8c"] Sep 30 10:39:49 crc kubenswrapper[4970]: I0930 10:39:49.670409 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:39:49 crc kubenswrapper[4970]: E0930 10:39:49.671064 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:39:49 crc kubenswrapper[4970]: I0930 10:39:49.800452 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2bg8c" podUID="7d7af408-8f57-4f60-b083-2de4bb4a9707" containerName="registry-server" containerID="cri-o://dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629" gracePeriod=2 Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.317075 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bg8c" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.407963 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7af408-8f57-4f60-b083-2de4bb4a9707-catalog-content\") pod \"7d7af408-8f57-4f60-b083-2de4bb4a9707\" (UID: \"7d7af408-8f57-4f60-b083-2de4bb4a9707\") " Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.408030 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7af408-8f57-4f60-b083-2de4bb4a9707-utilities\") pod \"7d7af408-8f57-4f60-b083-2de4bb4a9707\" (UID: \"7d7af408-8f57-4f60-b083-2de4bb4a9707\") " Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.408130 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz5jj\" (UniqueName: \"kubernetes.io/projected/7d7af408-8f57-4f60-b083-2de4bb4a9707-kube-api-access-vz5jj\") pod \"7d7af408-8f57-4f60-b083-2de4bb4a9707\" (UID: \"7d7af408-8f57-4f60-b083-2de4bb4a9707\") " Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.408878 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d7af408-8f57-4f60-b083-2de4bb4a9707-utilities" (OuterVolumeSpecName: "utilities") pod "7d7af408-8f57-4f60-b083-2de4bb4a9707" (UID: "7d7af408-8f57-4f60-b083-2de4bb4a9707"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.414493 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d7af408-8f57-4f60-b083-2de4bb4a9707-kube-api-access-vz5jj" (OuterVolumeSpecName: "kube-api-access-vz5jj") pod "7d7af408-8f57-4f60-b083-2de4bb4a9707" (UID: "7d7af408-8f57-4f60-b083-2de4bb4a9707"). InnerVolumeSpecName "kube-api-access-vz5jj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.421146 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d7af408-8f57-4f60-b083-2de4bb4a9707-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d7af408-8f57-4f60-b083-2de4bb4a9707" (UID: "7d7af408-8f57-4f60-b083-2de4bb4a9707"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.510752 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz5jj\" (UniqueName: \"kubernetes.io/projected/7d7af408-8f57-4f60-b083-2de4bb4a9707-kube-api-access-vz5jj\") on node \"crc\" DevicePath \"\"" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.510791 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d7af408-8f57-4f60-b083-2de4bb4a9707-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.510805 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d7af408-8f57-4f60-b083-2de4bb4a9707-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.809444 4970 generic.go:334] "Generic (PLEG): container finished" podID="7d7af408-8f57-4f60-b083-2de4bb4a9707" containerID="dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629" exitCode=0 Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.809536 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bg8c" event={"ID":"7d7af408-8f57-4f60-b083-2de4bb4a9707","Type":"ContainerDied","Data":"dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629"} Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.809850 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2bg8c" event={"ID":"7d7af408-8f57-4f60-b083-2de4bb4a9707","Type":"ContainerDied","Data":"b2099f67da529faeb07aba6a5d889e81c0083cae16863c9e97d041c693c8067d"} Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.809889 4970 scope.go:117] "RemoveContainer" containerID="dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.809570 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2bg8c" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.846828 4970 scope.go:117] "RemoveContainer" containerID="f58a29bc07334df4ed8f4d3d69dc7d879ea94d3ac0d1a422ff17616d29d0ad9b" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.858005 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bg8c"] Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.865604 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2bg8c"] Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.880293 4970 scope.go:117] "RemoveContainer" containerID="9560b4bdd38bf180ef9abfe3e44383c942b8d7e9dae634e7e65c1afcf54f064b" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.936463 4970 scope.go:117] "RemoveContainer" containerID="dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629" Sep 30 10:39:50 crc kubenswrapper[4970]: E0930 10:39:50.936937 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629\": container with ID starting with dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629 not found: ID does not exist" containerID="dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.936977 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629"} err="failed to get container status \"dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629\": rpc error: code = NotFound desc = could not find container \"dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629\": container with ID starting with dc9714ce77915dccf204b6e3bcc66ec9ee8c2e5359eddba87f0785aecc760629 not found: ID does not exist" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.937020 4970 scope.go:117] "RemoveContainer" containerID="f58a29bc07334df4ed8f4d3d69dc7d879ea94d3ac0d1a422ff17616d29d0ad9b" Sep 30 10:39:50 crc kubenswrapper[4970]: E0930 10:39:50.937331 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f58a29bc07334df4ed8f4d3d69dc7d879ea94d3ac0d1a422ff17616d29d0ad9b\": container with ID starting with f58a29bc07334df4ed8f4d3d69dc7d879ea94d3ac0d1a422ff17616d29d0ad9b not found: ID does not exist" containerID="f58a29bc07334df4ed8f4d3d69dc7d879ea94d3ac0d1a422ff17616d29d0ad9b" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.937356 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f58a29bc07334df4ed8f4d3d69dc7d879ea94d3ac0d1a422ff17616d29d0ad9b"} err="failed to get container status \"f58a29bc07334df4ed8f4d3d69dc7d879ea94d3ac0d1a422ff17616d29d0ad9b\": rpc error: code = NotFound desc = could not find container \"f58a29bc07334df4ed8f4d3d69dc7d879ea94d3ac0d1a422ff17616d29d0ad9b\": container with ID starting with f58a29bc07334df4ed8f4d3d69dc7d879ea94d3ac0d1a422ff17616d29d0ad9b not found: ID does not exist" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.937370 4970 scope.go:117] "RemoveContainer" containerID="9560b4bdd38bf180ef9abfe3e44383c942b8d7e9dae634e7e65c1afcf54f064b" Sep 30 10:39:50 crc kubenswrapper[4970]: E0930 10:39:50.937765 4970 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9560b4bdd38bf180ef9abfe3e44383c942b8d7e9dae634e7e65c1afcf54f064b\": container with ID starting with 9560b4bdd38bf180ef9abfe3e44383c942b8d7e9dae634e7e65c1afcf54f064b not found: ID does not exist" containerID="9560b4bdd38bf180ef9abfe3e44383c942b8d7e9dae634e7e65c1afcf54f064b" Sep 30 10:39:50 crc kubenswrapper[4970]: I0930 10:39:50.937800 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9560b4bdd38bf180ef9abfe3e44383c942b8d7e9dae634e7e65c1afcf54f064b"} err="failed to get container status \"9560b4bdd38bf180ef9abfe3e44383c942b8d7e9dae634e7e65c1afcf54f064b\": rpc error: code = NotFound desc = could not find container \"9560b4bdd38bf180ef9abfe3e44383c942b8d7e9dae634e7e65c1afcf54f064b\": container with ID starting with 9560b4bdd38bf180ef9abfe3e44383c942b8d7e9dae634e7e65c1afcf54f064b not found: ID does not exist" Sep 30 10:39:51 crc kubenswrapper[4970]: I0930 10:39:51.679090 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d7af408-8f57-4f60-b083-2de4bb4a9707" path="/var/lib/kubelet/pods/7d7af408-8f57-4f60-b083-2de4bb4a9707/volumes" Sep 30 10:40:01 crc kubenswrapper[4970]: I0930 10:40:01.669317 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:40:01 crc kubenswrapper[4970]: E0930 10:40:01.671242 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:40:13 crc kubenswrapper[4970]: I0930 10:40:13.668805 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:40:13 crc kubenswrapper[4970]: E0930 10:40:13.669688 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:40:26 crc kubenswrapper[4970]: I0930 10:40:26.669472 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:40:26 crc kubenswrapper[4970]: E0930 10:40:26.670649 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:40:40 crc kubenswrapper[4970]: I0930 10:40:40.667799 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:40:40 crc kubenswrapper[4970]: E0930 10:40:40.668664 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:40:55 crc kubenswrapper[4970]: I0930 10:40:55.668201 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:40:55 crc kubenswrapper[4970]: E0930 10:40:55.668884 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:41:06 crc kubenswrapper[4970]: I0930 10:41:06.668700 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:41:06 crc kubenswrapper[4970]: E0930 10:41:06.669404 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:41:20 crc kubenswrapper[4970]: I0930 10:41:20.668731 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:41:20 crc kubenswrapper[4970]: E0930 10:41:20.669594 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:41:31 crc kubenswrapper[4970]: I0930 10:41:31.668561 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:41:31 crc kubenswrapper[4970]: E0930 10:41:31.669480 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:41:45 crc kubenswrapper[4970]: I0930 10:41:45.668329 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:41:45 crc kubenswrapper[4970]: E0930 10:41:45.669172 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:41:50 crc kubenswrapper[4970]: I0930 10:41:50.106200 4970 generic.go:334] "Generic (PLEG): container finished" podID="b39c8562-3dd8-439a-b17d-967859c86ec2" containerID="32913e37741401ee67aa00072758e3166f6d5e3edb2f21026d6c227273950ffd" exitCode=0 Sep 30 10:41:50 crc kubenswrapper[4970]: I0930 10:41:50.106277 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b39c8562-3dd8-439a-b17d-967859c86ec2","Type":"ContainerDied","Data":"32913e37741401ee67aa00072758e3166f6d5e3edb2f21026d6c227273950ffd"} Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.502279 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.568748 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b39c8562-3dd8-439a-b17d-967859c86ec2-config-data\") pod \"b39c8562-3dd8-439a-b17d-967859c86ec2\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.568864 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"b39c8562-3dd8-439a-b17d-967859c86ec2\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.568931 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b39c8562-3dd8-439a-b17d-967859c86ec2-test-operator-ephemeral-temporary\") pod \"b39c8562-3dd8-439a-b17d-967859c86ec2\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.568964 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-openstack-config-secret\") pod \"b39c8562-3dd8-439a-b17d-967859c86ec2\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.569045 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-ssh-key\") pod \"b39c8562-3dd8-439a-b17d-967859c86ec2\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.569065 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b39c8562-3dd8-439a-b17d-967859c86ec2-openstack-config\") pod \"b39c8562-3dd8-439a-b17d-967859c86ec2\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.569111 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-ca-certs\") pod \"b39c8562-3dd8-439a-b17d-967859c86ec2\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.569169 4970 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-nv8mj\" (UniqueName: \"kubernetes.io/projected/b39c8562-3dd8-439a-b17d-967859c86ec2-kube-api-access-nv8mj\") pod \"b39c8562-3dd8-439a-b17d-967859c86ec2\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.569242 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/b39c8562-3dd8-439a-b17d-967859c86ec2-test-operator-ephemeral-workdir\") pod \"b39c8562-3dd8-439a-b17d-967859c86ec2\" (UID: \"b39c8562-3dd8-439a-b17d-967859c86ec2\") " Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.569902 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39c8562-3dd8-439a-b17d-967859c86ec2-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "b39c8562-3dd8-439a-b17d-967859c86ec2" (UID: "b39c8562-3dd8-439a-b17d-967859c86ec2"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.570268 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39c8562-3dd8-439a-b17d-967859c86ec2-config-data" (OuterVolumeSpecName: "config-data") pod "b39c8562-3dd8-439a-b17d-967859c86ec2" (UID: "b39c8562-3dd8-439a-b17d-967859c86ec2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.570546 4970 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/b39c8562-3dd8-439a-b17d-967859c86ec2-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.570582 4970 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b39c8562-3dd8-439a-b17d-967859c86ec2-config-data\") on node \"crc\" DevicePath \"\"" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.575257 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39c8562-3dd8-439a-b17d-967859c86ec2-kube-api-access-nv8mj" (OuterVolumeSpecName: "kube-api-access-nv8mj") pod "b39c8562-3dd8-439a-b17d-967859c86ec2" (UID: "b39c8562-3dd8-439a-b17d-967859c86ec2"). InnerVolumeSpecName "kube-api-access-nv8mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.577171 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "b39c8562-3dd8-439a-b17d-967859c86ec2" (UID: "b39c8562-3dd8-439a-b17d-967859c86ec2"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.579955 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39c8562-3dd8-439a-b17d-967859c86ec2-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "b39c8562-3dd8-439a-b17d-967859c86ec2" (UID: "b39c8562-3dd8-439a-b17d-967859c86ec2"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.621048 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "b39c8562-3dd8-439a-b17d-967859c86ec2" (UID: "b39c8562-3dd8-439a-b17d-967859c86ec2"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.624502 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b39c8562-3dd8-439a-b17d-967859c86ec2" (UID: "b39c8562-3dd8-439a-b17d-967859c86ec2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.631727 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "b39c8562-3dd8-439a-b17d-967859c86ec2" (UID: "b39c8562-3dd8-439a-b17d-967859c86ec2"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.664450 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b39c8562-3dd8-439a-b17d-967859c86ec2-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "b39c8562-3dd8-439a-b17d-967859c86ec2" (UID: "b39c8562-3dd8-439a-b17d-967859c86ec2"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.671946 4970 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.671982 4970 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.672008 4970 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-ssh-key\") on node \"crc\" DevicePath \"\"" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.672017 4970 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b39c8562-3dd8-439a-b17d-967859c86ec2-openstack-config\") on node \"crc\" DevicePath \"\"" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.672027 4970 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/b39c8562-3dd8-439a-b17d-967859c86ec2-ca-certs\") on node \"crc\" DevicePath \"\"" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.672037 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv8mj\" (UniqueName: \"kubernetes.io/projected/b39c8562-3dd8-439a-b17d-967859c86ec2-kube-api-access-nv8mj\") on node \"crc\" DevicePath \"\"" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.672047 4970 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" 
(UniqueName: \"kubernetes.io/empty-dir/b39c8562-3dd8-439a-b17d-967859c86ec2-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.690789 4970 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Sep 30 10:41:51 crc kubenswrapper[4970]: I0930 10:41:51.773359 4970 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Sep 30 10:41:52 crc kubenswrapper[4970]: I0930 10:41:52.140262 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"b39c8562-3dd8-439a-b17d-967859c86ec2","Type":"ContainerDied","Data":"f46c1b2e2485abf9c8600e2de2167688b7f5ebd2b8d2e02f34f654d0bf40a52d"} Sep 30 10:41:52 crc kubenswrapper[4970]: I0930 10:41:52.140329 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f46c1b2e2485abf9c8600e2de2167688b7f5ebd2b8d2e02f34f654d0bf40a52d" Sep 30 10:41:52 crc kubenswrapper[4970]: I0930 10:41:52.140328 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.072836 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Sep 30 10:41:58 crc kubenswrapper[4970]: E0930 10:41:58.073675 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7af408-8f57-4f60-b083-2de4bb4a9707" containerName="extract-utilities" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.073689 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7af408-8f57-4f60-b083-2de4bb4a9707" containerName="extract-utilities" Sep 30 10:41:58 crc kubenswrapper[4970]: E0930 10:41:58.073696 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7af408-8f57-4f60-b083-2de4bb4a9707" containerName="extract-content" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.073703 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7af408-8f57-4f60-b083-2de4bb4a9707" containerName="extract-content" Sep 30 10:41:58 crc kubenswrapper[4970]: E0930 10:41:58.073724 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39c8562-3dd8-439a-b17d-967859c86ec2" containerName="tempest-tests-tempest-tests-runner" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.073729 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39c8562-3dd8-439a-b17d-967859c86ec2" containerName="tempest-tests-tempest-tests-runner" Sep 30 10:41:58 crc kubenswrapper[4970]: E0930 10:41:58.073741 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d7af408-8f57-4f60-b083-2de4bb4a9707" containerName="registry-server" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.073749 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d7af408-8f57-4f60-b083-2de4bb4a9707" containerName="registry-server" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.073927 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d7af408-8f57-4f60-b083-2de4bb4a9707" containerName="registry-server" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.073948 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="b39c8562-3dd8-439a-b17d-967859c86ec2" 
containerName="tempest-tests-tempest-tests-runner" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.074553 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.077181 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-dtgsh" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.086035 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.199885 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7bdc6121-410f-4041-b69f-98368027c449\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.200033 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnsjh\" (UniqueName: \"kubernetes.io/projected/7bdc6121-410f-4041-b69f-98368027c449-kube-api-access-fnsjh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7bdc6121-410f-4041-b69f-98368027c449\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.301290 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnsjh\" (UniqueName: \"kubernetes.io/projected/7bdc6121-410f-4041-b69f-98368027c449-kube-api-access-fnsjh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7bdc6121-410f-4041-b69f-98368027c449\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.301440 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7bdc6121-410f-4041-b69f-98368027c449\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.301971 4970 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7bdc6121-410f-4041-b69f-98368027c449\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.329654 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnsjh\" (UniqueName: \"kubernetes.io/projected/7bdc6121-410f-4041-b69f-98368027c449-kube-api-access-fnsjh\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7bdc6121-410f-4041-b69f-98368027c449\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.349802 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"7bdc6121-410f-4041-b69f-98368027c449\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.409785 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Sep 30 10:41:58 crc kubenswrapper[4970]: I0930 10:41:58.872082 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Sep 30 10:41:58 crc kubenswrapper[4970]: W0930 10:41:58.876165 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bdc6121_410f_4041_b69f_98368027c449.slice/crio-aa1a126a1afbdeafc1565313fcd13894aed12f570805e46233ef930a004cc8b4 WatchSource:0}: Error finding container aa1a126a1afbdeafc1565313fcd13894aed12f570805e46233ef930a004cc8b4: Status 404 returned error can't find the container with id aa1a126a1afbdeafc1565313fcd13894aed12f570805e46233ef930a004cc8b4 Sep 30 10:41:59 crc kubenswrapper[4970]: I0930 10:41:59.202847 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7bdc6121-410f-4041-b69f-98368027c449","Type":"ContainerStarted","Data":"aa1a126a1afbdeafc1565313fcd13894aed12f570805e46233ef930a004cc8b4"} Sep 30 10:42:00 crc kubenswrapper[4970]: I0930 10:42:00.213827 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"7bdc6121-410f-4041-b69f-98368027c449","Type":"ContainerStarted","Data":"762bdd9ad91d06d63b46c96d930a98db9875569d32921b33d0d3bee25089445d"} Sep 30 10:42:00 crc kubenswrapper[4970]: I0930 10:42:00.668924 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:42:00 crc kubenswrapper[4970]: E0930 10:42:00.669430 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.086877 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=10.066690691 podStartE2EDuration="11.08685576s" podCreationTimestamp="2025-09-30 10:41:58 +0000 UTC" firstStartedPulling="2025-09-30 10:41:58.87962723 +0000 UTC m=+3331.951478204" lastFinishedPulling="2025-09-30 10:41:59.899792339 +0000 UTC m=+3332.971643273" observedRunningTime="2025-09-30 10:42:00.233516685 +0000 UTC m=+3333.305367649" watchObservedRunningTime="2025-09-30 10:42:09.08685576 +0000 UTC m=+3342.158706694" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.090255 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4nc6v"] Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.093687 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.099178 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nc6v"] Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.115858 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85478146-cd89-484a-8285-f21a79686e52-catalog-content\") pod \"community-operators-4nc6v\" (UID: \"85478146-cd89-484a-8285-f21a79686e52\") " pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.116419 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfzq5\" (UniqueName: \"kubernetes.io/projected/85478146-cd89-484a-8285-f21a79686e52-kube-api-access-mfzq5\") pod \"community-operators-4nc6v\" (UID: \"85478146-cd89-484a-8285-f21a79686e52\") " pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.116511 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85478146-cd89-484a-8285-f21a79686e52-utilities\") pod \"community-operators-4nc6v\" (UID: \"85478146-cd89-484a-8285-f21a79686e52\") " pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.217476 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85478146-cd89-484a-8285-f21a79686e52-catalog-content\") pod \"community-operators-4nc6v\" (UID: \"85478146-cd89-484a-8285-f21a79686e52\") " pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.217538 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfzq5\" (UniqueName: \"kubernetes.io/projected/85478146-cd89-484a-8285-f21a79686e52-kube-api-access-mfzq5\") pod \"community-operators-4nc6v\" (UID: \"85478146-cd89-484a-8285-f21a79686e52\") " pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.217615 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85478146-cd89-484a-8285-f21a79686e52-utilities\") pod \"community-operators-4nc6v\" (UID: \"85478146-cd89-484a-8285-f21a79686e52\") " pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.218272 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85478146-cd89-484a-8285-f21a79686e52-catalog-content\") pod \"community-operators-4nc6v\" (UID: \"85478146-cd89-484a-8285-f21a79686e52\") " pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.218337 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85478146-cd89-484a-8285-f21a79686e52-utilities\") pod \"community-operators-4nc6v\" (UID: \"85478146-cd89-484a-8285-f21a79686e52\") " pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.246377 4970 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mfzq5\" (UniqueName: \"kubernetes.io/projected/85478146-cd89-484a-8285-f21a79686e52-kube-api-access-mfzq5\") pod \"community-operators-4nc6v\" (UID: \"85478146-cd89-484a-8285-f21a79686e52\") " pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.435193 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:09 crc kubenswrapper[4970]: I0930 10:42:09.981926 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nc6v"] Sep 30 10:42:10 crc kubenswrapper[4970]: I0930 10:42:10.314106 4970 generic.go:334] "Generic (PLEG): container finished" podID="85478146-cd89-484a-8285-f21a79686e52" containerID="be6d1dfddac9f868d9a69701fc1b0411bcccf23c631942f7af7c8a0853bb1682" exitCode=0 Sep 30 10:42:10 crc kubenswrapper[4970]: I0930 10:42:10.314163 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nc6v" event={"ID":"85478146-cd89-484a-8285-f21a79686e52","Type":"ContainerDied","Data":"be6d1dfddac9f868d9a69701fc1b0411bcccf23c631942f7af7c8a0853bb1682"} Sep 30 10:42:10 crc kubenswrapper[4970]: I0930 10:42:10.314443 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nc6v" event={"ID":"85478146-cd89-484a-8285-f21a79686e52","Type":"ContainerStarted","Data":"e47d735494c5efe9df91f7ecc62434e94bdfb8c39f73f30eb36ee9f5d265845d"} Sep 30 10:42:11 crc kubenswrapper[4970]: I0930 10:42:11.329262 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nc6v" event={"ID":"85478146-cd89-484a-8285-f21a79686e52","Type":"ContainerStarted","Data":"dd44082921f930d72b08ae3fd975d7bd525ff405f52491070067271fa28ec93a"} Sep 30 10:42:12 crc kubenswrapper[4970]: I0930 10:42:12.343167 4970 generic.go:334] "Generic (PLEG): container finished" podID="85478146-cd89-484a-8285-f21a79686e52" containerID="dd44082921f930d72b08ae3fd975d7bd525ff405f52491070067271fa28ec93a" exitCode=0 Sep 30 10:42:12 crc kubenswrapper[4970]: I0930 10:42:12.343250 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nc6v" event={"ID":"85478146-cd89-484a-8285-f21a79686e52","Type":"ContainerDied","Data":"dd44082921f930d72b08ae3fd975d7bd525ff405f52491070067271fa28ec93a"} Sep 30 10:42:12 crc kubenswrapper[4970]: I0930 10:42:12.669285 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:42:12 crc kubenswrapper[4970]: E0930 10:42:12.669866 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:42:13 crc kubenswrapper[4970]: I0930 10:42:13.356569 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nc6v" event={"ID":"85478146-cd89-484a-8285-f21a79686e52","Type":"ContainerStarted","Data":"73adf0bd3461d2f18ee3faa2e996a2288344d086e09dc5385ae99506fd17183e"} Sep 30 10:42:13 crc kubenswrapper[4970]: I0930 10:42:13.385225 4970 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4nc6v" podStartSLOduration=1.873908737 podStartE2EDuration="4.385197557s" podCreationTimestamp="2025-09-30 10:42:09 +0000 UTC" firstStartedPulling="2025-09-30 10:42:10.316207797 +0000 UTC m=+3343.388058731" lastFinishedPulling="2025-09-30 10:42:12.827496597 +0000 UTC m=+3345.899347551" observedRunningTime="2025-09-30 10:42:13.377709311 +0000 UTC m=+3346.449560275" watchObservedRunningTime="2025-09-30 10:42:13.385197557 +0000 UTC m=+3346.457048501" Sep 30 10:42:17 crc kubenswrapper[4970]: I0930 10:42:17.531686 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vb8xq/must-gather-r6xqq"] Sep 30 10:42:17 crc kubenswrapper[4970]: I0930 10:42:17.534187 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vb8xq/must-gather-r6xqq" Sep 30 10:42:17 crc kubenswrapper[4970]: I0930 10:42:17.536320 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vb8xq"/"openshift-service-ca.crt" Sep 30 10:42:17 crc kubenswrapper[4970]: I0930 10:42:17.536561 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vb8xq"/"kube-root-ca.crt" Sep 30 10:42:17 crc kubenswrapper[4970]: I0930 10:42:17.597996 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b47h8\" (UniqueName: \"kubernetes.io/projected/aae4be87-4f8b-4024-bfff-f07824adde63-kube-api-access-b47h8\") pod \"must-gather-r6xqq\" (UID: \"aae4be87-4f8b-4024-bfff-f07824adde63\") " pod="openshift-must-gather-vb8xq/must-gather-r6xqq" Sep 30 10:42:17 crc kubenswrapper[4970]: I0930 10:42:17.598085 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aae4be87-4f8b-4024-bfff-f07824adde63-must-gather-output\") pod \"must-gather-r6xqq\" (UID: \"aae4be87-4f8b-4024-bfff-f07824adde63\") " pod="openshift-must-gather-vb8xq/must-gather-r6xqq" Sep 30 10:42:17 crc kubenswrapper[4970]: I0930 10:42:17.600937 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vb8xq/must-gather-r6xqq"] Sep 30 10:42:17 crc kubenswrapper[4970]: I0930 10:42:17.700640 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aae4be87-4f8b-4024-bfff-f07824adde63-must-gather-output\") pod \"must-gather-r6xqq\" (UID: \"aae4be87-4f8b-4024-bfff-f07824adde63\") " pod="openshift-must-gather-vb8xq/must-gather-r6xqq" Sep 30 10:42:17 crc kubenswrapper[4970]: I0930 10:42:17.701149 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aae4be87-4f8b-4024-bfff-f07824adde63-must-gather-output\") pod \"must-gather-r6xqq\" (UID: \"aae4be87-4f8b-4024-bfff-f07824adde63\") " pod="openshift-must-gather-vb8xq/must-gather-r6xqq" Sep 30 10:42:17 crc kubenswrapper[4970]: I0930 10:42:17.702249 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b47h8\" (UniqueName: \"kubernetes.io/projected/aae4be87-4f8b-4024-bfff-f07824adde63-kube-api-access-b47h8\") pod \"must-gather-r6xqq\" (UID: \"aae4be87-4f8b-4024-bfff-f07824adde63\") " pod="openshift-must-gather-vb8xq/must-gather-r6xqq" Sep 30 10:42:17 crc kubenswrapper[4970]: I0930 10:42:17.721586 4970 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b47h8\" (UniqueName: \"kubernetes.io/projected/aae4be87-4f8b-4024-bfff-f07824adde63-kube-api-access-b47h8\") pod \"must-gather-r6xqq\" (UID: \"aae4be87-4f8b-4024-bfff-f07824adde63\") " pod="openshift-must-gather-vb8xq/must-gather-r6xqq" Sep 30 10:42:17 crc kubenswrapper[4970]: I0930 10:42:17.866334 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vb8xq/must-gather-r6xqq" Sep 30 10:42:18 crc kubenswrapper[4970]: I0930 10:42:18.379416 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vb8xq/must-gather-r6xqq"] Sep 30 10:42:18 crc kubenswrapper[4970]: I0930 10:42:18.407266 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vb8xq/must-gather-r6xqq" event={"ID":"aae4be87-4f8b-4024-bfff-f07824adde63","Type":"ContainerStarted","Data":"3cc227470ccda2cb88c79597c7cbe495a3abb3aeaaebdd635da64c6a49ab4c2c"} Sep 30 10:42:19 crc kubenswrapper[4970]: I0930 10:42:19.450274 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:19 crc kubenswrapper[4970]: I0930 10:42:19.450785 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:19 crc kubenswrapper[4970]: I0930 10:42:19.516538 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:20 crc kubenswrapper[4970]: I0930 10:42:20.468860 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:20 crc kubenswrapper[4970]: I0930 10:42:20.527918 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nc6v"] Sep 30 10:42:22 crc kubenswrapper[4970]: I0930 10:42:22.444336 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4nc6v" podUID="85478146-cd89-484a-8285-f21a79686e52" containerName="registry-server" containerID="cri-o://73adf0bd3461d2f18ee3faa2e996a2288344d086e09dc5385ae99506fd17183e" gracePeriod=2 Sep 30 10:42:23 crc kubenswrapper[4970]: I0930 10:42:23.455195 4970 generic.go:334] "Generic (PLEG): container finished" podID="85478146-cd89-484a-8285-f21a79686e52" containerID="73adf0bd3461d2f18ee3faa2e996a2288344d086e09dc5385ae99506fd17183e" exitCode=0 Sep 30 10:42:23 crc kubenswrapper[4970]: I0930 10:42:23.455521 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nc6v" event={"ID":"85478146-cd89-484a-8285-f21a79686e52","Type":"ContainerDied","Data":"73adf0bd3461d2f18ee3faa2e996a2288344d086e09dc5385ae99506fd17183e"} Sep 30 10:42:23 crc kubenswrapper[4970]: I0930 10:42:23.735656 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:23 crc kubenswrapper[4970]: I0930 10:42:23.830587 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzq5\" (UniqueName: \"kubernetes.io/projected/85478146-cd89-484a-8285-f21a79686e52-kube-api-access-mfzq5\") pod \"85478146-cd89-484a-8285-f21a79686e52\" (UID: \"85478146-cd89-484a-8285-f21a79686e52\") " Sep 30 10:42:23 crc kubenswrapper[4970]: I0930 10:42:23.830764 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85478146-cd89-484a-8285-f21a79686e52-utilities\") pod \"85478146-cd89-484a-8285-f21a79686e52\" (UID: \"85478146-cd89-484a-8285-f21a79686e52\") " Sep 30 10:42:23 crc kubenswrapper[4970]: I0930 10:42:23.830879 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85478146-cd89-484a-8285-f21a79686e52-catalog-content\") pod \"85478146-cd89-484a-8285-f21a79686e52\" (UID: \"85478146-cd89-484a-8285-f21a79686e52\") " Sep 30 10:42:23 crc kubenswrapper[4970]: I0930 10:42:23.831950 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85478146-cd89-484a-8285-f21a79686e52-utilities" (OuterVolumeSpecName: "utilities") pod "85478146-cd89-484a-8285-f21a79686e52" (UID: "85478146-cd89-484a-8285-f21a79686e52"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:42:23 crc kubenswrapper[4970]: I0930 10:42:23.836680 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85478146-cd89-484a-8285-f21a79686e52-kube-api-access-mfzq5" (OuterVolumeSpecName: "kube-api-access-mfzq5") pod "85478146-cd89-484a-8285-f21a79686e52" (UID: "85478146-cd89-484a-8285-f21a79686e52"). InnerVolumeSpecName "kube-api-access-mfzq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:42:23 crc kubenswrapper[4970]: I0930 10:42:23.888470 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85478146-cd89-484a-8285-f21a79686e52-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85478146-cd89-484a-8285-f21a79686e52" (UID: "85478146-cd89-484a-8285-f21a79686e52"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:42:23 crc kubenswrapper[4970]: I0930 10:42:23.933467 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85478146-cd89-484a-8285-f21a79686e52-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:42:23 crc kubenswrapper[4970]: I0930 10:42:23.933494 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85478146-cd89-484a-8285-f21a79686e52-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:42:23 crc kubenswrapper[4970]: I0930 10:42:23.933504 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfzq5\" (UniqueName: \"kubernetes.io/projected/85478146-cd89-484a-8285-f21a79686e52-kube-api-access-mfzq5\") on node \"crc\" DevicePath \"\"" Sep 30 10:42:24 crc kubenswrapper[4970]: I0930 10:42:24.466792 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vb8xq/must-gather-r6xqq" event={"ID":"aae4be87-4f8b-4024-bfff-f07824adde63","Type":"ContainerStarted","Data":"b3422e91cd626e467332925b20321111e9acfb8a2178171ba72cbba59b50160d"} Sep 30 10:42:24 crc kubenswrapper[4970]: I0930 10:42:24.467248 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vb8xq/must-gather-r6xqq" event={"ID":"aae4be87-4f8b-4024-bfff-f07824adde63","Type":"ContainerStarted","Data":"23536f499743797a9a66962965dfb4d2cb9fe2d793cc9064ba720eaae77310c7"} Sep 30 10:42:24 crc kubenswrapper[4970]: I0930 10:42:24.469061 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nc6v" event={"ID":"85478146-cd89-484a-8285-f21a79686e52","Type":"ContainerDied","Data":"e47d735494c5efe9df91f7ecc62434e94bdfb8c39f73f30eb36ee9f5d265845d"} Sep 30 10:42:24 crc kubenswrapper[4970]: I0930 10:42:24.469123 4970 scope.go:117] "RemoveContainer" containerID="73adf0bd3461d2f18ee3faa2e996a2288344d086e09dc5385ae99506fd17183e" Sep 30 10:42:24 crc kubenswrapper[4970]: I0930 10:42:24.469123 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4nc6v" Sep 30 10:42:24 crc kubenswrapper[4970]: I0930 10:42:24.484194 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vb8xq/must-gather-r6xqq" podStartSLOduration=2.111211589 podStartE2EDuration="7.484174892s" podCreationTimestamp="2025-09-30 10:42:17 +0000 UTC" firstStartedPulling="2025-09-30 10:42:18.389522795 +0000 UTC m=+3351.461373729" lastFinishedPulling="2025-09-30 10:42:23.762486088 +0000 UTC m=+3356.834337032" observedRunningTime="2025-09-30 10:42:24.480207593 +0000 UTC m=+3357.552058707" watchObservedRunningTime="2025-09-30 10:42:24.484174892 +0000 UTC m=+3357.556025826" Sep 30 10:42:24 crc kubenswrapper[4970]: I0930 10:42:24.500650 4970 scope.go:117] "RemoveContainer" containerID="dd44082921f930d72b08ae3fd975d7bd525ff405f52491070067271fa28ec93a" Sep 30 10:42:24 crc kubenswrapper[4970]: I0930 10:42:24.522502 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nc6v"] Sep 30 10:42:24 crc kubenswrapper[4970]: I0930 10:42:24.532049 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4nc6v"] Sep 30 10:42:24 crc kubenswrapper[4970]: I0930 10:42:24.544404 4970 scope.go:117] "RemoveContainer" containerID="be6d1dfddac9f868d9a69701fc1b0411bcccf23c631942f7af7c8a0853bb1682" Sep 30 10:42:25 crc kubenswrapper[4970]: I0930 10:42:25.681760 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85478146-cd89-484a-8285-f21a79686e52" path="/var/lib/kubelet/pods/85478146-cd89-484a-8285-f21a79686e52/volumes" Sep 30 10:42:26 crc kubenswrapper[4970]: I0930 10:42:26.669028 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:42:26 crc kubenswrapper[4970]: E0930 10:42:26.669327 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 10:42:27.771404 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vb8xq/crc-debug-m5gzw"] Sep 30 10:42:27 crc kubenswrapper[4970]: E0930 10:42:27.772103 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85478146-cd89-484a-8285-f21a79686e52" containerName="extract-content" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 10:42:27.772120 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="85478146-cd89-484a-8285-f21a79686e52" containerName="extract-content" Sep 30 10:42:27 crc kubenswrapper[4970]: E0930 10:42:27.772158 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85478146-cd89-484a-8285-f21a79686e52" containerName="registry-server" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 10:42:27.772166 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="85478146-cd89-484a-8285-f21a79686e52" containerName="registry-server" Sep 30 10:42:27 crc kubenswrapper[4970]: E0930 10:42:27.772184 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85478146-cd89-484a-8285-f21a79686e52" containerName="extract-utilities" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 
10:42:27.772192 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="85478146-cd89-484a-8285-f21a79686e52" containerName="extract-utilities" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 10:42:27.772403 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="85478146-cd89-484a-8285-f21a79686e52" containerName="registry-server" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 10:42:27.773191 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 10:42:27.775244 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vb8xq"/"default-dockercfg-5nfx8" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 10:42:27.795945 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/589be5aa-762c-482b-8ef0-e47b5fb2251c-host\") pod \"crc-debug-m5gzw\" (UID: \"589be5aa-762c-482b-8ef0-e47b5fb2251c\") " pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 10:42:27.796039 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsjgg\" (UniqueName: \"kubernetes.io/projected/589be5aa-762c-482b-8ef0-e47b5fb2251c-kube-api-access-dsjgg\") pod \"crc-debug-m5gzw\" (UID: \"589be5aa-762c-482b-8ef0-e47b5fb2251c\") " pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 10:42:27.897301 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/589be5aa-762c-482b-8ef0-e47b5fb2251c-host\") pod \"crc-debug-m5gzw\" (UID: \"589be5aa-762c-482b-8ef0-e47b5fb2251c\") " pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 10:42:27.897385 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsjgg\" (UniqueName: \"kubernetes.io/projected/589be5aa-762c-482b-8ef0-e47b5fb2251c-kube-api-access-dsjgg\") pod \"crc-debug-m5gzw\" (UID: \"589be5aa-762c-482b-8ef0-e47b5fb2251c\") " pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 10:42:27.897521 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/589be5aa-762c-482b-8ef0-e47b5fb2251c-host\") pod \"crc-debug-m5gzw\" (UID: \"589be5aa-762c-482b-8ef0-e47b5fb2251c\") " pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" Sep 30 10:42:27 crc kubenswrapper[4970]: I0930 10:42:27.939189 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsjgg\" (UniqueName: \"kubernetes.io/projected/589be5aa-762c-482b-8ef0-e47b5fb2251c-kube-api-access-dsjgg\") pod \"crc-debug-m5gzw\" (UID: \"589be5aa-762c-482b-8ef0-e47b5fb2251c\") " pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" Sep 30 10:42:28 crc kubenswrapper[4970]: I0930 10:42:28.090512 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" Sep 30 10:42:28 crc kubenswrapper[4970]: W0930 10:42:28.132166 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod589be5aa_762c_482b_8ef0_e47b5fb2251c.slice/crio-51097adb61d53e3f3569ee0d33fdb85c0a819405b0c88b0c7b9f002d35f735d2 WatchSource:0}: Error finding container 51097adb61d53e3f3569ee0d33fdb85c0a819405b0c88b0c7b9f002d35f735d2: Status 404 returned error can't find the container with id 51097adb61d53e3f3569ee0d33fdb85c0a819405b0c88b0c7b9f002d35f735d2 Sep 30 10:42:28 crc kubenswrapper[4970]: I0930 10:42:28.506226 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" event={"ID":"589be5aa-762c-482b-8ef0-e47b5fb2251c","Type":"ContainerStarted","Data":"51097adb61d53e3f3569ee0d33fdb85c0a819405b0c88b0c7b9f002d35f735d2"} Sep 30 10:42:40 crc kubenswrapper[4970]: I0930 10:42:40.668749 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:42:40 crc kubenswrapper[4970]: E0930 10:42:40.669524 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:42:45 crc kubenswrapper[4970]: E0930 10:42:45.869842 4970 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Sep 30 10:42:45 crc kubenswrapper[4970]: E0930 10:42:45.870394 4970 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsjgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-m5gzw_openshift-must-gather-vb8xq(589be5aa-762c-482b-8ef0-e47b5fb2251c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Sep 30 10:42:45 crc kubenswrapper[4970]: E0930 10:42:45.871611 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" podUID="589be5aa-762c-482b-8ef0-e47b5fb2251c" Sep 30 10:42:46 crc kubenswrapper[4970]: E0930 10:42:46.670916 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" podUID="589be5aa-762c-482b-8ef0-e47b5fb2251c" Sep 30 10:42:52 crc kubenswrapper[4970]: I0930 10:42:52.668734 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:42:52 crc kubenswrapper[4970]: E0930 10:42:52.669844 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:42:59 crc kubenswrapper[4970]: I0930 10:42:59.781691 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" event={"ID":"589be5aa-762c-482b-8ef0-e47b5fb2251c","Type":"ContainerStarted","Data":"83c590c7c5aea36fc973f4b22d1e5455501e86c93b47d2fd3bae0d2485b08b30"} Sep 30 10:42:59 crc kubenswrapper[4970]: I0930 10:42:59.822933 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" podStartSLOduration=1.73937199 podStartE2EDuration="32.822913543s" podCreationTimestamp="2025-09-30 10:42:27 +0000 UTC" firstStartedPulling="2025-09-30 10:42:28.134502172 +0000 UTC 
m=+3361.206353106" lastFinishedPulling="2025-09-30 10:42:59.218043725 +0000 UTC m=+3392.289894659" observedRunningTime="2025-09-30 10:42:59.81771518 +0000 UTC m=+3392.889566134" watchObservedRunningTime="2025-09-30 10:42:59.822913543 +0000 UTC m=+3392.894764497" Sep 30 10:43:03 crc kubenswrapper[4970]: I0930 10:43:03.668491 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:43:03 crc kubenswrapper[4970]: E0930 10:43:03.669138 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:43:14 crc kubenswrapper[4970]: I0930 10:43:14.668725 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:43:14 crc kubenswrapper[4970]: E0930 10:43:14.669550 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:43:25 crc kubenswrapper[4970]: I0930 10:43:25.669458 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:43:25 crc kubenswrapper[4970]: E0930 10:43:25.670285 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:43:26 crc kubenswrapper[4970]: I0930 10:43:26.021737 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-868647ddbb-dxwsf_bbcbf5f3-02eb-4969-af25-0c219017b29a/barbican-api/0.log" Sep 30 10:43:26 crc kubenswrapper[4970]: I0930 10:43:26.058389 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-868647ddbb-dxwsf_bbcbf5f3-02eb-4969-af25-0c219017b29a/barbican-api-log/0.log" Sep 30 10:43:26 crc kubenswrapper[4970]: I0930 10:43:26.217806 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56bb9f86b-7tl55_cb32bf8e-e046-4e85-87b2-56993b0e6e30/barbican-keystone-listener/0.log" Sep 30 10:43:26 crc kubenswrapper[4970]: I0930 10:43:26.382682 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56bb9f86b-7tl55_cb32bf8e-e046-4e85-87b2-56993b0e6e30/barbican-keystone-listener-log/0.log" Sep 30 10:43:26 crc kubenswrapper[4970]: I0930 10:43:26.911013 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-767b995857-mf5zx_f4d4c15f-9170-47c3-9716-919828e0cb40/barbican-worker/0.log" Sep 30 10:43:26 crc kubenswrapper[4970]: I0930 10:43:26.940926 4970 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-767b995857-mf5zx_f4d4c15f-9170-47c3-9716-919828e0cb40/barbican-worker-log/0.log" Sep 30 10:43:27 crc kubenswrapper[4970]: I0930 10:43:27.118151 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9_f4b4ad42-77b6-450b-befa-9bb0012fe9ae/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:27 crc kubenswrapper[4970]: I0930 10:43:27.184853 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3dc8250d-7e71-408b-8aa3-947ebe6ef0a1/ceilometer-central-agent/0.log" Sep 30 10:43:27 crc kubenswrapper[4970]: I0930 10:43:27.347128 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3dc8250d-7e71-408b-8aa3-947ebe6ef0a1/ceilometer-notification-agent/0.log" Sep 30 10:43:27 crc kubenswrapper[4970]: I0930 10:43:27.402661 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3dc8250d-7e71-408b-8aa3-947ebe6ef0a1/proxy-httpd/0.log" Sep 30 10:43:27 crc kubenswrapper[4970]: I0930 10:43:27.420078 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3dc8250d-7e71-408b-8aa3-947ebe6ef0a1/sg-core/0.log" Sep 30 10:43:27 crc kubenswrapper[4970]: I0930 10:43:27.611350 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bb0312b9-337a-4175-ae77-cd4964578d13/cinder-api-log/0.log" Sep 30 10:43:27 crc kubenswrapper[4970]: I0930 10:43:27.617309 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bb0312b9-337a-4175-ae77-cd4964578d13/cinder-api/0.log" Sep 30 10:43:27 crc kubenswrapper[4970]: I0930 10:43:27.829237 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7510aa65-ae21-4344-94a7-9354f0822ae3/cinder-scheduler/0.log" Sep 30 10:43:27 crc kubenswrapper[4970]: I0930 10:43:27.875976 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7510aa65-ae21-4344-94a7-9354f0822ae3/probe/0.log" Sep 30 10:43:28 crc kubenswrapper[4970]: I0930 10:43:28.136236 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd_d70e09c1-47df-4742-b2fc-77c354169b46/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:28 crc kubenswrapper[4970]: I0930 10:43:28.308903 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r_039233e2-0b03-4514-b359-5552e4d09ffc/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:28 crc kubenswrapper[4970]: I0930 10:43:28.400635 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-6bk6t_90db2984-4178-487d-812f-a51f345ae911/init/0.log" Sep 30 10:43:28 crc kubenswrapper[4970]: I0930 10:43:28.549070 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-6bk6t_90db2984-4178-487d-812f-a51f345ae911/init/0.log" Sep 30 10:43:28 crc kubenswrapper[4970]: I0930 10:43:28.602660 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-6bk6t_90db2984-4178-487d-812f-a51f345ae911/dnsmasq-dns/0.log" Sep 30 10:43:28 crc kubenswrapper[4970]: I0930 10:43:28.781561 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-77k7p_877ee61c-4abb-4daf-a42d-f2d26afbe137/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:28 crc kubenswrapper[4970]: I0930 10:43:28.860678 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e691ead3-4698-47b1-9ea4-b63f8e649a34/glance-httpd/0.log" Sep 30 10:43:28 crc kubenswrapper[4970]: I0930 10:43:28.991909 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e691ead3-4698-47b1-9ea4-b63f8e649a34/glance-log/0.log" Sep 30 10:43:29 crc kubenswrapper[4970]: I0930 10:43:29.141716 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_354c5f9e-ca1b-4724-960f-a376abda6ee2/glance-httpd/0.log" Sep 30 10:43:29 crc kubenswrapper[4970]: I0930 10:43:29.223137 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_354c5f9e-ca1b-4724-960f-a376abda6ee2/glance-log/0.log" Sep 30 10:43:29 crc kubenswrapper[4970]: I0930 10:43:29.444630 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8fbb4f9c8-n8t5n_8b479413-73f2-4159-8ec6-5e23f139c53c/horizon/0.log" Sep 30 10:43:29 crc kubenswrapper[4970]: I0930 10:43:29.496223 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc_d25a3525-d30d-4e16-b446-dace1e7987a0/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:29 crc kubenswrapper[4970]: I0930 10:43:29.670028 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8fbb4f9c8-n8t5n_8b479413-73f2-4159-8ec6-5e23f139c53c/horizon-log/0.log" Sep 30 10:43:29 crc kubenswrapper[4970]: I0930 10:43:29.777078 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kl5qr_2f87be4d-7eec-4133-a07f-4cbe2b88548f/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:29 crc kubenswrapper[4970]: I0930 10:43:29.981847 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_fd6ccdd4-d83d-47fe-8283-b4625ad7d17f/kube-state-metrics/0.log" Sep 30 10:43:30 crc kubenswrapper[4970]: I0930 10:43:30.016323 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6cf9989bfd-qxb2j_f1a85d9a-3eae-49ff-af87-14d444dec7d6/keystone-api/0.log" Sep 30 10:43:30 crc kubenswrapper[4970]: I0930 10:43:30.177080 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-jgcst_109a756f-75b7-4ce1-a45f-3363d2d4097e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:30 crc kubenswrapper[4970]: I0930 10:43:30.518375 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-65fc8b84cc-9lm9w_7244a0da-0989-4ba5-be03-aab3ab0fadce/neutron-httpd/0.log" Sep 30 10:43:30 crc kubenswrapper[4970]: I0930 10:43:30.539118 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-65fc8b84cc-9lm9w_7244a0da-0989-4ba5-be03-aab3ab0fadce/neutron-api/0.log" Sep 30 10:43:30 crc kubenswrapper[4970]: I0930 10:43:30.797146 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm_be128dee-e0c7-4db4-a760-b03c3b8d263d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:31 crc 
kubenswrapper[4970]: I0930 10:43:31.358759 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b1eaeb05-1ae9-4640-bc87-da6567c4f1a1/nova-api-log/0.log" Sep 30 10:43:31 crc kubenswrapper[4970]: I0930 10:43:31.525583 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_66fda279-0629-46e9-8f55-145febd6facd/nova-cell0-conductor-conductor/0.log" Sep 30 10:43:31 crc kubenswrapper[4970]: I0930 10:43:31.544633 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b1eaeb05-1ae9-4640-bc87-da6567c4f1a1/nova-api-api/0.log" Sep 30 10:43:31 crc kubenswrapper[4970]: I0930 10:43:31.854858 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3e334a93-12b2-402f-97e7-d5f77c7cb8bc/nova-cell1-conductor-conductor/0.log" Sep 30 10:43:31 crc kubenswrapper[4970]: I0930 10:43:31.968466 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e81a0f38-b543-4aa5-aef9-fc02f91800e5/nova-cell1-novncproxy-novncproxy/0.log" Sep 30 10:43:32 crc kubenswrapper[4970]: I0930 10:43:32.222612 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-428p6_4f461d08-f275-49fd-be5d-3f4198d81343/nova-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:32 crc kubenswrapper[4970]: I0930 10:43:32.401317 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fec6d022-057f-4f80-9da1-25c1f4e1544e/nova-metadata-log/0.log" Sep 30 10:43:32 crc kubenswrapper[4970]: I0930 10:43:32.855690 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c599b731-6bc5-4882-9f48-0abfa125f843/nova-scheduler-scheduler/0.log" Sep 30 10:43:32 crc kubenswrapper[4970]: I0930 10:43:32.983123 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_60d8ffcf-dc53-4fac-92a0-64136b4b0d4b/mysql-bootstrap/0.log" Sep 30 10:43:33 crc kubenswrapper[4970]: I0930 10:43:33.231425 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_60d8ffcf-dc53-4fac-92a0-64136b4b0d4b/mysql-bootstrap/0.log" Sep 30 10:43:33 crc kubenswrapper[4970]: I0930 10:43:33.279880 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_60d8ffcf-dc53-4fac-92a0-64136b4b0d4b/galera/0.log" Sep 30 10:43:33 crc kubenswrapper[4970]: I0930 10:43:33.532167 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dee5bc19-bb45-4962-adaf-ff6561817272/mysql-bootstrap/0.log" Sep 30 10:43:33 crc kubenswrapper[4970]: I0930 10:43:33.587879 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fec6d022-057f-4f80-9da1-25c1f4e1544e/nova-metadata-metadata/0.log" Sep 30 10:43:33 crc kubenswrapper[4970]: I0930 10:43:33.705826 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dee5bc19-bb45-4962-adaf-ff6561817272/mysql-bootstrap/0.log" Sep 30 10:43:33 crc kubenswrapper[4970]: I0930 10:43:33.729473 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dee5bc19-bb45-4962-adaf-ff6561817272/galera/0.log" Sep 30 10:43:33 crc kubenswrapper[4970]: I0930 10:43:33.918387 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8d91e0e8-ee07-493a-bb4d-6949ce548047/openstackclient/0.log" Sep 30 10:43:34 crc 
kubenswrapper[4970]: I0930 10:43:34.056209 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6v2k7_20c3b444-9843-4584-81cb-9e5cb444c98b/openstack-network-exporter/0.log" Sep 30 10:43:34 crc kubenswrapper[4970]: I0930 10:43:34.268598 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h8572_0baee2b3-0d8f-4586-a636-c452b0d541d9/ovsdb-server-init/0.log" Sep 30 10:43:34 crc kubenswrapper[4970]: I0930 10:43:34.441073 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h8572_0baee2b3-0d8f-4586-a636-c452b0d541d9/ovsdb-server-init/0.log" Sep 30 10:43:34 crc kubenswrapper[4970]: I0930 10:43:34.487572 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h8572_0baee2b3-0d8f-4586-a636-c452b0d541d9/ovsdb-server/0.log" Sep 30 10:43:34 crc kubenswrapper[4970]: I0930 10:43:34.504387 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h8572_0baee2b3-0d8f-4586-a636-c452b0d541d9/ovs-vswitchd/0.log" Sep 30 10:43:34 crc kubenswrapper[4970]: I0930 10:43:34.794899 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vtdnt_ea8f06d0-75e0-4ed8-9e37-086886b019e5/ovn-controller/0.log" Sep 30 10:43:35 crc kubenswrapper[4970]: I0930 10:43:35.092233 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dwx4r_450706d9-395f-417b-b37d-7ded156dce3a/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:35 crc kubenswrapper[4970]: I0930 10:43:35.095594 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_57282deb-d1f0-4e71-90e2-71c39075d208/openstack-network-exporter/0.log" Sep 30 10:43:35 crc kubenswrapper[4970]: I0930 10:43:35.260666 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_57282deb-d1f0-4e71-90e2-71c39075d208/ovn-northd/0.log" Sep 30 10:43:35 crc kubenswrapper[4970]: I0930 10:43:35.409542 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_23d91298-5a5e-428e-afe3-f5625b74f3e0/openstack-network-exporter/0.log" Sep 30 10:43:35 crc kubenswrapper[4970]: I0930 10:43:35.492245 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_23d91298-5a5e-428e-afe3-f5625b74f3e0/ovsdbserver-nb/0.log" Sep 30 10:43:35 crc kubenswrapper[4970]: I0930 10:43:35.631719 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6fb84c2f-32c7-4ac2-b7aa-343846c86bfa/openstack-network-exporter/0.log" Sep 30 10:43:35 crc kubenswrapper[4970]: I0930 10:43:35.729753 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6fb84c2f-32c7-4ac2-b7aa-343846c86bfa/ovsdbserver-sb/0.log" Sep 30 10:43:35 crc kubenswrapper[4970]: I0930 10:43:35.963363 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8cc9569d-ll5d9_16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890/placement-api/0.log" Sep 30 10:43:36 crc kubenswrapper[4970]: I0930 10:43:36.079493 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8cc9569d-ll5d9_16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890/placement-log/0.log" Sep 30 10:43:36 crc kubenswrapper[4970]: I0930 10:43:36.164189 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7690d385-7ca5-472c-8ee7-5c3aa4030951/setup-container/0.log" Sep 
30 10:43:36 crc kubenswrapper[4970]: I0930 10:43:36.442455 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7690d385-7ca5-472c-8ee7-5c3aa4030951/setup-container/0.log" Sep 30 10:43:36 crc kubenswrapper[4970]: I0930 10:43:36.534644 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7690d385-7ca5-472c-8ee7-5c3aa4030951/rabbitmq/0.log" Sep 30 10:43:36 crc kubenswrapper[4970]: I0930 10:43:36.644644 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4582e248-17bf-40bb-9072-f64d72d1fc82/setup-container/0.log" Sep 30 10:43:36 crc kubenswrapper[4970]: I0930 10:43:36.812081 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4582e248-17bf-40bb-9072-f64d72d1fc82/setup-container/0.log" Sep 30 10:43:36 crc kubenswrapper[4970]: I0930 10:43:36.880932 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4582e248-17bf-40bb-9072-f64d72d1fc82/rabbitmq/0.log" Sep 30 10:43:37 crc kubenswrapper[4970]: I0930 10:43:37.064120 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj_0cf74a13-4f04-472b-af17-1c856152950f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:37 crc kubenswrapper[4970]: I0930 10:43:37.116488 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-bbkpz_a934a1c0-31ef-4341-a85f-a13cd865adc1/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:37 crc kubenswrapper[4970]: I0930 10:43:37.320086 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr_ebfb47a0-7f15-4839-b84d-7aef631222f8/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:37 crc kubenswrapper[4970]: I0930 10:43:37.525569 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-vg96h_74d48203-8780-4ee2-8db2-39388705bab0/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:37 crc kubenswrapper[4970]: I0930 10:43:37.678184 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:43:37 crc kubenswrapper[4970]: I0930 10:43:37.705619 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-fccfg_2d7598c9-6363-43e1-8913-1b7707fb57eb/ssh-known-hosts-edpm-deployment/0.log" Sep 30 10:43:38 crc kubenswrapper[4970]: I0930 10:43:38.013140 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-597dc56955-zfx9s_1ade2be1-8027-4a99-ae4d-f0394e4d9c1d/proxy-httpd/0.log" Sep 30 10:43:38 crc kubenswrapper[4970]: I0930 10:43:38.025848 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-597dc56955-zfx9s_1ade2be1-8027-4a99-ae4d-f0394e4d9c1d/proxy-server/0.log" Sep 30 10:43:38 crc kubenswrapper[4970]: I0930 10:43:38.155608 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"09447c97669fe66df87c231f3f61ffa87f4993dc0b0714ceaa7aee2d0c6465bc"} Sep 30 10:43:38 crc kubenswrapper[4970]: I0930 10:43:38.249673 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-ring-rebalance-ctbn9_9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5/swift-ring-rebalance/0.log" Sep 30 10:43:38 crc kubenswrapper[4970]: I0930 10:43:38.371639 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/account-auditor/0.log" Sep 30 10:43:38 crc kubenswrapper[4970]: I0930 10:43:38.503199 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/account-reaper/0.log" Sep 30 10:43:38 crc kubenswrapper[4970]: I0930 10:43:38.645876 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/account-server/0.log" Sep 30 10:43:38 crc kubenswrapper[4970]: I0930 10:43:38.697353 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/account-replicator/0.log" Sep 30 10:43:38 crc kubenswrapper[4970]: I0930 10:43:38.719697 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/container-auditor/0.log" Sep 30 10:43:38 crc kubenswrapper[4970]: I0930 10:43:38.959592 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/container-updater/0.log" Sep 30 10:43:38 crc kubenswrapper[4970]: I0930 10:43:38.971482 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/container-server/0.log" Sep 30 10:43:38 crc kubenswrapper[4970]: I0930 10:43:38.974318 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/container-replicator/0.log" Sep 30 10:43:39 crc kubenswrapper[4970]: I0930 10:43:39.199899 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/object-auditor/0.log" Sep 30 10:43:39 crc kubenswrapper[4970]: I0930 10:43:39.222369 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/object-replicator/0.log" Sep 30 10:43:39 crc kubenswrapper[4970]: I0930 10:43:39.271960 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/object-expirer/0.log" Sep 30 10:43:39 crc kubenswrapper[4970]: I0930 10:43:39.417756 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/object-server/0.log" Sep 30 10:43:39 crc kubenswrapper[4970]: I0930 10:43:39.459676 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/object-updater/0.log" Sep 30 10:43:39 crc kubenswrapper[4970]: I0930 10:43:39.535435 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/rsync/0.log" Sep 30 10:43:39 crc kubenswrapper[4970]: I0930 10:43:39.620250 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/swift-recon-cron/0.log" Sep 30 10:43:39 crc kubenswrapper[4970]: I0930 10:43:39.773261 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6_54899213-55ca-42b6-8838-e42c962341b6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:39 crc kubenswrapper[4970]: I0930 10:43:39.966972 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b39c8562-3dd8-439a-b17d-967859c86ec2/tempest-tests-tempest-tests-runner/0.log" Sep 30 10:43:40 crc kubenswrapper[4970]: I0930 10:43:40.172258 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7bdc6121-410f-4041-b69f-98368027c449/test-operator-logs-container/0.log" Sep 30 10:43:40 crc kubenswrapper[4970]: I0930 10:43:40.265464 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nf84v_65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:43:44 crc kubenswrapper[4970]: I0930 10:43:44.859079 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d524179d-ea87-48d3-b87f-da18d0a059c8/memcached/0.log" Sep 30 10:44:51 crc kubenswrapper[4970]: I0930 10:44:51.938361 4970 generic.go:334] "Generic (PLEG): container finished" podID="589be5aa-762c-482b-8ef0-e47b5fb2251c" containerID="83c590c7c5aea36fc973f4b22d1e5455501e86c93b47d2fd3bae0d2485b08b30" exitCode=0 Sep 30 10:44:51 crc kubenswrapper[4970]: I0930 10:44:51.938455 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" event={"ID":"589be5aa-762c-482b-8ef0-e47b5fb2251c","Type":"ContainerDied","Data":"83c590c7c5aea36fc973f4b22d1e5455501e86c93b47d2fd3bae0d2485b08b30"} Sep 30 10:44:53 crc kubenswrapper[4970]: I0930 10:44:53.098463 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" Sep 30 10:44:53 crc kubenswrapper[4970]: I0930 10:44:53.138641 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vb8xq/crc-debug-m5gzw"] Sep 30 10:44:53 crc kubenswrapper[4970]: I0930 10:44:53.150120 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vb8xq/crc-debug-m5gzw"] Sep 30 10:44:53 crc kubenswrapper[4970]: I0930 10:44:53.256097 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/589be5aa-762c-482b-8ef0-e47b5fb2251c-host\") pod \"589be5aa-762c-482b-8ef0-e47b5fb2251c\" (UID: \"589be5aa-762c-482b-8ef0-e47b5fb2251c\") " Sep 30 10:44:53 crc kubenswrapper[4970]: I0930 10:44:53.256241 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/589be5aa-762c-482b-8ef0-e47b5fb2251c-host" (OuterVolumeSpecName: "host") pod "589be5aa-762c-482b-8ef0-e47b5fb2251c" (UID: "589be5aa-762c-482b-8ef0-e47b5fb2251c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:44:53 crc kubenswrapper[4970]: I0930 10:44:53.256883 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsjgg\" (UniqueName: \"kubernetes.io/projected/589be5aa-762c-482b-8ef0-e47b5fb2251c-kube-api-access-dsjgg\") pod \"589be5aa-762c-482b-8ef0-e47b5fb2251c\" (UID: \"589be5aa-762c-482b-8ef0-e47b5fb2251c\") " Sep 30 10:44:53 crc kubenswrapper[4970]: I0930 10:44:53.257404 4970 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/589be5aa-762c-482b-8ef0-e47b5fb2251c-host\") on node \"crc\" DevicePath \"\"" Sep 30 10:44:53 crc kubenswrapper[4970]: I0930 10:44:53.264975 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/589be5aa-762c-482b-8ef0-e47b5fb2251c-kube-api-access-dsjgg" (OuterVolumeSpecName: "kube-api-access-dsjgg") pod "589be5aa-762c-482b-8ef0-e47b5fb2251c" (UID: "589be5aa-762c-482b-8ef0-e47b5fb2251c"). InnerVolumeSpecName "kube-api-access-dsjgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:44:53 crc kubenswrapper[4970]: I0930 10:44:53.359974 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsjgg\" (UniqueName: \"kubernetes.io/projected/589be5aa-762c-482b-8ef0-e47b5fb2251c-kube-api-access-dsjgg\") on node \"crc\" DevicePath \"\"" Sep 30 10:44:53 crc kubenswrapper[4970]: I0930 10:44:53.686241 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="589be5aa-762c-482b-8ef0-e47b5fb2251c" path="/var/lib/kubelet/pods/589be5aa-762c-482b-8ef0-e47b5fb2251c/volumes" Sep 30 10:44:53 crc kubenswrapper[4970]: I0930 10:44:53.970773 4970 scope.go:117] "RemoveContainer" containerID="83c590c7c5aea36fc973f4b22d1e5455501e86c93b47d2fd3bae0d2485b08b30" Sep 30 10:44:53 crc kubenswrapper[4970]: I0930 10:44:53.970880 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vb8xq/crc-debug-m5gzw" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.347091 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vb8xq/crc-debug-l4s5b"] Sep 30 10:44:54 crc kubenswrapper[4970]: E0930 10:44:54.347530 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="589be5aa-762c-482b-8ef0-e47b5fb2251c" containerName="container-00" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.347546 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="589be5aa-762c-482b-8ef0-e47b5fb2251c" containerName="container-00" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.347811 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="589be5aa-762c-482b-8ef0-e47b5fb2251c" containerName="container-00" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.348607 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.351448 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vb8xq"/"default-dockercfg-5nfx8" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.482641 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad200f62-b229-40bf-8b95-40f71a615e5d-host\") pod \"crc-debug-l4s5b\" (UID: \"ad200f62-b229-40bf-8b95-40f71a615e5d\") " pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.483120 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64cl7\" (UniqueName: \"kubernetes.io/projected/ad200f62-b229-40bf-8b95-40f71a615e5d-kube-api-access-64cl7\") pod \"crc-debug-l4s5b\" (UID: \"ad200f62-b229-40bf-8b95-40f71a615e5d\") " pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.585922 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64cl7\" (UniqueName: \"kubernetes.io/projected/ad200f62-b229-40bf-8b95-40f71a615e5d-kube-api-access-64cl7\") pod \"crc-debug-l4s5b\" (UID: \"ad200f62-b229-40bf-8b95-40f71a615e5d\") " pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.586252 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad200f62-b229-40bf-8b95-40f71a615e5d-host\") pod \"crc-debug-l4s5b\" (UID: \"ad200f62-b229-40bf-8b95-40f71a615e5d\") " pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.586403 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad200f62-b229-40bf-8b95-40f71a615e5d-host\") pod \"crc-debug-l4s5b\" (UID: \"ad200f62-b229-40bf-8b95-40f71a615e5d\") " pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.619148 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64cl7\" (UniqueName: \"kubernetes.io/projected/ad200f62-b229-40bf-8b95-40f71a615e5d-kube-api-access-64cl7\") pod \"crc-debug-l4s5b\" (UID: \"ad200f62-b229-40bf-8b95-40f71a615e5d\") " pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.672811 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" Sep 30 10:44:54 crc kubenswrapper[4970]: I0930 10:44:54.981258 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" event={"ID":"ad200f62-b229-40bf-8b95-40f71a615e5d","Type":"ContainerStarted","Data":"28bcb0e63c9dbc644ca14858ab46941baf623214e204e5b8d7c0d79bb504ff93"} Sep 30 10:44:56 crc kubenswrapper[4970]: I0930 10:44:56.001565 4970 generic.go:334] "Generic (PLEG): container finished" podID="ad200f62-b229-40bf-8b95-40f71a615e5d" containerID="dc144096e06c16bfbd4f1871e07a19b274d4005880dc0f229cb4b754c8a708c9" exitCode=0 Sep 30 10:44:56 crc kubenswrapper[4970]: I0930 10:44:56.001802 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" event={"ID":"ad200f62-b229-40bf-8b95-40f71a615e5d","Type":"ContainerDied","Data":"dc144096e06c16bfbd4f1871e07a19b274d4005880dc0f229cb4b754c8a708c9"} Sep 30 10:44:57 crc kubenswrapper[4970]: I0930 10:44:57.114622 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" Sep 30 10:44:57 crc kubenswrapper[4970]: I0930 10:44:57.148909 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64cl7\" (UniqueName: \"kubernetes.io/projected/ad200f62-b229-40bf-8b95-40f71a615e5d-kube-api-access-64cl7\") pod \"ad200f62-b229-40bf-8b95-40f71a615e5d\" (UID: \"ad200f62-b229-40bf-8b95-40f71a615e5d\") " Sep 30 10:44:57 crc kubenswrapper[4970]: I0930 10:44:57.149032 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad200f62-b229-40bf-8b95-40f71a615e5d-host\") pod \"ad200f62-b229-40bf-8b95-40f71a615e5d\" (UID: \"ad200f62-b229-40bf-8b95-40f71a615e5d\") " Sep 30 10:44:57 crc kubenswrapper[4970]: I0930 10:44:57.149733 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad200f62-b229-40bf-8b95-40f71a615e5d-host" (OuterVolumeSpecName: "host") pod "ad200f62-b229-40bf-8b95-40f71a615e5d" (UID: "ad200f62-b229-40bf-8b95-40f71a615e5d"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:44:57 crc kubenswrapper[4970]: I0930 10:44:57.155981 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad200f62-b229-40bf-8b95-40f71a615e5d-kube-api-access-64cl7" (OuterVolumeSpecName: "kube-api-access-64cl7") pod "ad200f62-b229-40bf-8b95-40f71a615e5d" (UID: "ad200f62-b229-40bf-8b95-40f71a615e5d"). InnerVolumeSpecName "kube-api-access-64cl7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:44:57 crc kubenswrapper[4970]: I0930 10:44:57.250605 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64cl7\" (UniqueName: \"kubernetes.io/projected/ad200f62-b229-40bf-8b95-40f71a615e5d-kube-api-access-64cl7\") on node \"crc\" DevicePath \"\"" Sep 30 10:44:57 crc kubenswrapper[4970]: I0930 10:44:57.250636 4970 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ad200f62-b229-40bf-8b95-40f71a615e5d-host\") on node \"crc\" DevicePath \"\"" Sep 30 10:44:58 crc kubenswrapper[4970]: I0930 10:44:58.022919 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" event={"ID":"ad200f62-b229-40bf-8b95-40f71a615e5d","Type":"ContainerDied","Data":"28bcb0e63c9dbc644ca14858ab46941baf623214e204e5b8d7c0d79bb504ff93"} Sep 30 10:44:58 crc kubenswrapper[4970]: I0930 10:44:58.023501 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28bcb0e63c9dbc644ca14858ab46941baf623214e204e5b8d7c0d79bb504ff93" Sep 30 10:44:58 crc kubenswrapper[4970]: I0930 10:44:58.023088 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vb8xq/crc-debug-l4s5b" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.178002 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2"] Sep 30 10:45:00 crc kubenswrapper[4970]: E0930 10:45:00.178712 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad200f62-b229-40bf-8b95-40f71a615e5d" containerName="container-00" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.178729 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad200f62-b229-40bf-8b95-40f71a615e5d" containerName="container-00" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.179016 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad200f62-b229-40bf-8b95-40f71a615e5d" containerName="container-00" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.179750 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.181525 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.181613 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.192276 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsb5g\" (UniqueName: \"kubernetes.io/projected/1bd70a71-10c9-4817-8af5-7bcc460ecf41-kube-api-access-fsb5g\") pod \"collect-profiles-29320485-nmkc2\" (UID: \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.193101 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bd70a71-10c9-4817-8af5-7bcc460ecf41-config-volume\") pod \"collect-profiles-29320485-nmkc2\" (UID: \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.193154 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bd70a71-10c9-4817-8af5-7bcc460ecf41-secret-volume\") pod \"collect-profiles-29320485-nmkc2\" (UID: \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.208703 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2"] Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.298270 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bd70a71-10c9-4817-8af5-7bcc460ecf41-config-volume\") pod \"collect-profiles-29320485-nmkc2\" (UID: \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.298524 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bd70a71-10c9-4817-8af5-7bcc460ecf41-secret-volume\") pod \"collect-profiles-29320485-nmkc2\" (UID: \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.298709 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsb5g\" (UniqueName: \"kubernetes.io/projected/1bd70a71-10c9-4817-8af5-7bcc460ecf41-kube-api-access-fsb5g\") pod \"collect-profiles-29320485-nmkc2\" (UID: \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.299678 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bd70a71-10c9-4817-8af5-7bcc460ecf41-config-volume\") pod 
\"collect-profiles-29320485-nmkc2\" (UID: \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.305600 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bd70a71-10c9-4817-8af5-7bcc460ecf41-secret-volume\") pod \"collect-profiles-29320485-nmkc2\" (UID: \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.315708 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsb5g\" (UniqueName: \"kubernetes.io/projected/1bd70a71-10c9-4817-8af5-7bcc460ecf41-kube-api-access-fsb5g\") pod \"collect-profiles-29320485-nmkc2\" (UID: \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.518206 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:00 crc kubenswrapper[4970]: I0930 10:45:00.973517 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2"] Sep 30 10:45:01 crc kubenswrapper[4970]: I0930 10:45:01.050398 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" event={"ID":"1bd70a71-10c9-4817-8af5-7bcc460ecf41","Type":"ContainerStarted","Data":"2ee32ad3ac98f1b08bbf5003e69b212c14c9193d5c149f915eaa983f8c060b2b"} Sep 30 10:45:02 crc kubenswrapper[4970]: I0930 10:45:02.059294 4970 generic.go:334] "Generic (PLEG): container finished" podID="1bd70a71-10c9-4817-8af5-7bcc460ecf41" containerID="190b8e3b3e67ec416323f5d704fe975e6b03d3555a685c88daab002d39e69bda" exitCode=0 Sep 30 10:45:02 crc kubenswrapper[4970]: I0930 10:45:02.059353 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" event={"ID":"1bd70a71-10c9-4817-8af5-7bcc460ecf41","Type":"ContainerDied","Data":"190b8e3b3e67ec416323f5d704fe975e6b03d3555a685c88daab002d39e69bda"} Sep 30 10:45:02 crc kubenswrapper[4970]: I0930 10:45:02.842739 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vb8xq/crc-debug-l4s5b"] Sep 30 10:45:02 crc kubenswrapper[4970]: I0930 10:45:02.850354 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vb8xq/crc-debug-l4s5b"] Sep 30 10:45:03 crc kubenswrapper[4970]: I0930 10:45:03.455839 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:03 crc kubenswrapper[4970]: I0930 10:45:03.575531 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsb5g\" (UniqueName: \"kubernetes.io/projected/1bd70a71-10c9-4817-8af5-7bcc460ecf41-kube-api-access-fsb5g\") pod \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\" (UID: \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\") " Sep 30 10:45:03 crc kubenswrapper[4970]: I0930 10:45:03.575573 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bd70a71-10c9-4817-8af5-7bcc460ecf41-secret-volume\") pod \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\" (UID: \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\") " Sep 30 10:45:03 crc kubenswrapper[4970]: I0930 10:45:03.575667 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bd70a71-10c9-4817-8af5-7bcc460ecf41-config-volume\") pod \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\" (UID: \"1bd70a71-10c9-4817-8af5-7bcc460ecf41\") " Sep 30 10:45:03 crc kubenswrapper[4970]: I0930 10:45:03.576552 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bd70a71-10c9-4817-8af5-7bcc460ecf41-config-volume" (OuterVolumeSpecName: "config-volume") pod "1bd70a71-10c9-4817-8af5-7bcc460ecf41" (UID: "1bd70a71-10c9-4817-8af5-7bcc460ecf41"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 30 10:45:03 crc kubenswrapper[4970]: I0930 10:45:03.580806 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bd70a71-10c9-4817-8af5-7bcc460ecf41-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1bd70a71-10c9-4817-8af5-7bcc460ecf41" (UID: "1bd70a71-10c9-4817-8af5-7bcc460ecf41"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 30 10:45:03 crc kubenswrapper[4970]: I0930 10:45:03.582643 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd70a71-10c9-4817-8af5-7bcc460ecf41-kube-api-access-fsb5g" (OuterVolumeSpecName: "kube-api-access-fsb5g") pod "1bd70a71-10c9-4817-8af5-7bcc460ecf41" (UID: "1bd70a71-10c9-4817-8af5-7bcc460ecf41"). InnerVolumeSpecName "kube-api-access-fsb5g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:45:03 crc kubenswrapper[4970]: I0930 10:45:03.677871 4970 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bd70a71-10c9-4817-8af5-7bcc460ecf41-config-volume\") on node \"crc\" DevicePath \"\"" Sep 30 10:45:03 crc kubenswrapper[4970]: I0930 10:45:03.677934 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsb5g\" (UniqueName: \"kubernetes.io/projected/1bd70a71-10c9-4817-8af5-7bcc460ecf41-kube-api-access-fsb5g\") on node \"crc\" DevicePath \"\"" Sep 30 10:45:03 crc kubenswrapper[4970]: I0930 10:45:03.677958 4970 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bd70a71-10c9-4817-8af5-7bcc460ecf41-secret-volume\") on node \"crc\" DevicePath \"\"" Sep 30 10:45:03 crc kubenswrapper[4970]: I0930 10:45:03.682213 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad200f62-b229-40bf-8b95-40f71a615e5d" path="/var/lib/kubelet/pods/ad200f62-b229-40bf-8b95-40f71a615e5d/volumes" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.020947 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vb8xq/crc-debug-xfwgk"] Sep 30 10:45:04 crc kubenswrapper[4970]: E0930 10:45:04.021760 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd70a71-10c9-4817-8af5-7bcc460ecf41" containerName="collect-profiles" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.021783 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd70a71-10c9-4817-8af5-7bcc460ecf41" containerName="collect-profiles" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.022059 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bd70a71-10c9-4817-8af5-7bcc460ecf41" containerName="collect-profiles" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.023130 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vb8xq/crc-debug-xfwgk" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.027284 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-vb8xq"/"default-dockercfg-5nfx8" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.082088 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" event={"ID":"1bd70a71-10c9-4817-8af5-7bcc460ecf41","Type":"ContainerDied","Data":"2ee32ad3ac98f1b08bbf5003e69b212c14c9193d5c149f915eaa983f8c060b2b"} Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.082440 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ee32ad3ac98f1b08bbf5003e69b212c14c9193d5c149f915eaa983f8c060b2b" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.082155 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29320485-nmkc2" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.091638 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hshxx\" (UniqueName: \"kubernetes.io/projected/6f117767-125a-4a83-b88d-f910dfc59ea4-kube-api-access-hshxx\") pod \"crc-debug-xfwgk\" (UID: \"6f117767-125a-4a83-b88d-f910dfc59ea4\") " pod="openshift-must-gather-vb8xq/crc-debug-xfwgk" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.091929 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f117767-125a-4a83-b88d-f910dfc59ea4-host\") pod \"crc-debug-xfwgk\" (UID: \"6f117767-125a-4a83-b88d-f910dfc59ea4\") " pod="openshift-must-gather-vb8xq/crc-debug-xfwgk" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.193785 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hshxx\" (UniqueName: \"kubernetes.io/projected/6f117767-125a-4a83-b88d-f910dfc59ea4-kube-api-access-hshxx\") pod \"crc-debug-xfwgk\" (UID: \"6f117767-125a-4a83-b88d-f910dfc59ea4\") " pod="openshift-must-gather-vb8xq/crc-debug-xfwgk" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.193877 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f117767-125a-4a83-b88d-f910dfc59ea4-host\") pod \"crc-debug-xfwgk\" (UID: \"6f117767-125a-4a83-b88d-f910dfc59ea4\") " pod="openshift-must-gather-vb8xq/crc-debug-xfwgk" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.193955 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f117767-125a-4a83-b88d-f910dfc59ea4-host\") pod \"crc-debug-xfwgk\" (UID: \"6f117767-125a-4a83-b88d-f910dfc59ea4\") " pod="openshift-must-gather-vb8xq/crc-debug-xfwgk" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.210873 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hshxx\" (UniqueName: \"kubernetes.io/projected/6f117767-125a-4a83-b88d-f910dfc59ea4-kube-api-access-hshxx\") pod \"crc-debug-xfwgk\" (UID: \"6f117767-125a-4a83-b88d-f910dfc59ea4\") " pod="openshift-must-gather-vb8xq/crc-debug-xfwgk" Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.346147 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vb8xq/crc-debug-xfwgk" Sep 30 10:45:04 crc kubenswrapper[4970]: W0930 10:45:04.379885 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f117767_125a_4a83_b88d_f910dfc59ea4.slice/crio-e4216812be01f9bb72f0a4600bb94ea34be9cc493a89ede8e02122649ceb8eec WatchSource:0}: Error finding container e4216812be01f9bb72f0a4600bb94ea34be9cc493a89ede8e02122649ceb8eec: Status 404 returned error can't find the container with id e4216812be01f9bb72f0a4600bb94ea34be9cc493a89ede8e02122649ceb8eec Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.543339 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr"] Sep 30 10:45:04 crc kubenswrapper[4970]: I0930 10:45:04.553975 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29320440-qpqkr"] Sep 30 10:45:05 crc kubenswrapper[4970]: I0930 10:45:05.091734 4970 generic.go:334] "Generic (PLEG): container finished" podID="6f117767-125a-4a83-b88d-f910dfc59ea4" containerID="99507b44b0d7f5d90c1390df045ccf164516d99c1f0a182bfed56ec1874d8cd9" exitCode=0 Sep 30 10:45:05 crc kubenswrapper[4970]: I0930 10:45:05.091820 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vb8xq/crc-debug-xfwgk" event={"ID":"6f117767-125a-4a83-b88d-f910dfc59ea4","Type":"ContainerDied","Data":"99507b44b0d7f5d90c1390df045ccf164516d99c1f0a182bfed56ec1874d8cd9"} Sep 30 10:45:05 crc kubenswrapper[4970]: I0930 10:45:05.092075 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vb8xq/crc-debug-xfwgk" event={"ID":"6f117767-125a-4a83-b88d-f910dfc59ea4","Type":"ContainerStarted","Data":"e4216812be01f9bb72f0a4600bb94ea34be9cc493a89ede8e02122649ceb8eec"} Sep 30 10:45:05 crc kubenswrapper[4970]: I0930 10:45:05.138491 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vb8xq/crc-debug-xfwgk"] Sep 30 10:45:05 crc kubenswrapper[4970]: I0930 10:45:05.151630 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vb8xq/crc-debug-xfwgk"] Sep 30 10:45:05 crc kubenswrapper[4970]: I0930 10:45:05.680236 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab1986e5-82d1-4ad7-b0db-f3bd67c590b5" path="/var/lib/kubelet/pods/ab1986e5-82d1-4ad7-b0db-f3bd67c590b5/volumes" Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.242373 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vb8xq/crc-debug-xfwgk" Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.336241 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hshxx\" (UniqueName: \"kubernetes.io/projected/6f117767-125a-4a83-b88d-f910dfc59ea4-kube-api-access-hshxx\") pod \"6f117767-125a-4a83-b88d-f910dfc59ea4\" (UID: \"6f117767-125a-4a83-b88d-f910dfc59ea4\") " Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.336379 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f117767-125a-4a83-b88d-f910dfc59ea4-host\") pod \"6f117767-125a-4a83-b88d-f910dfc59ea4\" (UID: \"6f117767-125a-4a83-b88d-f910dfc59ea4\") " Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.336510 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6f117767-125a-4a83-b88d-f910dfc59ea4-host" (OuterVolumeSpecName: "host") pod "6f117767-125a-4a83-b88d-f910dfc59ea4" (UID: "6f117767-125a-4a83-b88d-f910dfc59ea4"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.337018 4970 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6f117767-125a-4a83-b88d-f910dfc59ea4-host\") on node \"crc\" DevicePath \"\"" Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.341687 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f117767-125a-4a83-b88d-f910dfc59ea4-kube-api-access-hshxx" (OuterVolumeSpecName: "kube-api-access-hshxx") pod "6f117767-125a-4a83-b88d-f910dfc59ea4" (UID: "6f117767-125a-4a83-b88d-f910dfc59ea4"). InnerVolumeSpecName "kube-api-access-hshxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.438424 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hshxx\" (UniqueName: \"kubernetes.io/projected/6f117767-125a-4a83-b88d-f910dfc59ea4-kube-api-access-hshxx\") on node \"crc\" DevicePath \"\"" Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.532014 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/util/0.log" Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.722660 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/util/0.log" Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.724751 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/pull/0.log" Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.728614 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/pull/0.log" Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.877226 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/extract/0.log" Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.877679 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/pull/0.log" Sep 30 10:45:06 crc kubenswrapper[4970]: I0930 10:45:06.895473 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/util/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.056673 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-fdjgr_d288c95d-759c-4b29-8be6-304869f99ae7/kube-rbac-proxy/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.094149 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-fdjgr_d288c95d-759c-4b29-8be6-304869f99ae7/manager/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.109451 4970 scope.go:117] "RemoveContainer" containerID="99507b44b0d7f5d90c1390df045ccf164516d99c1f0a182bfed56ec1874d8cd9" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.109470 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vb8xq/crc-debug-xfwgk" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.130841 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-ws6gj_7131ae21-9827-4028-9841-fbc480e7b938/kube-rbac-proxy/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.263696 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-ws6gj_7131ae21-9827-4028-9841-fbc480e7b938/manager/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.322720 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-pf2ph_c9a40f4a-1de7-45da-91e9-4f11637452b2/kube-rbac-proxy/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.334587 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-pf2ph_c9a40f4a-1de7-45da-91e9-4f11637452b2/manager/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.466526 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-q2llj_b611fd3e-a529-4c90-8e81-c7352004d62f/kube-rbac-proxy/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.613535 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-q2llj_b611fd3e-a529-4c90-8e81-c7352004d62f/manager/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.647892 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-ckjvw_908cf55d-1ac7-4814-9f4e-ddb57acb1b76/kube-rbac-proxy/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.687206 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f117767-125a-4a83-b88d-f910dfc59ea4" path="/var/lib/kubelet/pods/6f117767-125a-4a83-b88d-f910dfc59ea4/volumes" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.690696 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-ckjvw_908cf55d-1ac7-4814-9f4e-ddb57acb1b76/manager/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.797635 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-7bcpp_0dab040d-a74a-48f1-b2e5-fb2fe6de3b58/kube-rbac-proxy/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.897852 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-7bcpp_0dab040d-a74a-48f1-b2e5-fb2fe6de3b58/manager/0.log" Sep 30 10:45:07 crc kubenswrapper[4970]: I0930 10:45:07.959184 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d857cc749-svx8h_b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7/kube-rbac-proxy/0.log" Sep 30 10:45:08 crc kubenswrapper[4970]: I0930 10:45:08.128753 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d857cc749-svx8h_b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7/manager/0.log" Sep 30 10:45:08 crc kubenswrapper[4970]: I0930 10:45:08.130396 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7975b88857-js7xj_cefaa649-872b-43be-9763-85ee950bb5d6/kube-rbac-proxy/0.log" Sep 30 10:45:08 crc kubenswrapper[4970]: I0930 10:45:08.184152 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7975b88857-js7xj_cefaa649-872b-43be-9763-85ee950bb5d6/manager/0.log" Sep 30 10:45:08 crc kubenswrapper[4970]: I0930 10:45:08.334256 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-vjpqd_9d9bdcb3-a944-4379-8dfd-858a022e946a/kube-rbac-proxy/0.log" Sep 30 10:45:08 crc kubenswrapper[4970]: I0930 10:45:08.393894 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-vjpqd_9d9bdcb3-a944-4379-8dfd-858a022e946a/manager/0.log" Sep 30 10:45:08 crc kubenswrapper[4970]: I0930 10:45:08.497309 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-rjk8q_ae58a1aa-0503-4387-91cf-fc6f396a180f/kube-rbac-proxy/0.log" Sep 30 10:45:08 crc kubenswrapper[4970]: I0930 10:45:08.515115 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-rjk8q_ae58a1aa-0503-4387-91cf-fc6f396a180f/manager/0.log" Sep 30 10:45:08 crc kubenswrapper[4970]: I0930 10:45:08.601924 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-vwkw2_1b1a92f2-46aa-492c-906b-1b86c58ba818/kube-rbac-proxy/0.log" Sep 30 10:45:08 crc kubenswrapper[4970]: I0930 10:45:08.682504 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-vwkw2_1b1a92f2-46aa-492c-906b-1b86c58ba818/manager/0.log" Sep 30 10:45:08 crc kubenswrapper[4970]: I0930 10:45:08.785723 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64d7b59854-8m95z_0283cb68-98f4-4dcf-99c0-55ebc251dc19/kube-rbac-proxy/0.log" Sep 30 10:45:08 crc kubenswrapper[4970]: I0930 10:45:08.821281 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64d7b59854-8m95z_0283cb68-98f4-4dcf-99c0-55ebc251dc19/manager/0.log" Sep 30 10:45:08 crc kubenswrapper[4970]: I0930 10:45:08.928805 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-c7c776c96-74r7d_815b1df3-7d86-407a-a793-baec392c0f76/kube-rbac-proxy/0.log" Sep 30 10:45:09 crc kubenswrapper[4970]: I0930 10:45:09.048770 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-76fcc6dc7c-p9fxz_b4a0b16f-5d81-4236-850f-03f628bb3595/kube-rbac-proxy/0.log" Sep 30 10:45:09 crc kubenswrapper[4970]: I0930 10:45:09.049026 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-c7c776c96-74r7d_815b1df3-7d86-407a-a793-baec392c0f76/manager/0.log" Sep 30 10:45:09 crc kubenswrapper[4970]: I0930 10:45:09.174326 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-76fcc6dc7c-p9fxz_b4a0b16f-5d81-4236-850f-03f628bb3595/manager/0.log" Sep 30 10:45:09 crc kubenswrapper[4970]: I0930 10:45:09.215427 4970 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6d776955-gnxmq_40f541c2-3a4e-48ec-a01f-a3d395202085/manager/0.log" Sep 30 10:45:09 crc kubenswrapper[4970]: I0930 10:45:09.240521 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6d776955-gnxmq_40f541c2-3a4e-48ec-a01f-a3d395202085/kube-rbac-proxy/0.log" Sep 30 10:45:09 crc kubenswrapper[4970]: I0930 10:45:09.391262 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5d64b45c9c-7q8rq_b33f5230-0a43-418a-a25c-690de07ddc21/kube-rbac-proxy/0.log" Sep 30 10:45:09 crc kubenswrapper[4970]: I0930 10:45:09.736347 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7d8c4cd779-pt9l2_814b0f3a-2bb6-45ba-a6c2-f798b43d4494/kube-rbac-proxy/0.log" Sep 30 10:45:09 crc kubenswrapper[4970]: I0930 10:45:09.773566 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7d8c4cd779-pt9l2_814b0f3a-2bb6-45ba-a6c2-f798b43d4494/operator/0.log" Sep 30 10:45:09 crc kubenswrapper[4970]: I0930 10:45:09.898736 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xv9kb_8bc86bb1-b7dd-4e60-9d3a-820f8b7a99ef/registry-server/0.log" Sep 30 10:45:09 crc kubenswrapper[4970]: I0930 10:45:09.959210 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-9976ff44c-ss9vs_db952a6d-9ea1-482e-aec3-7a93fcd6587c/kube-rbac-proxy/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.082680 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-9976ff44c-ss9vs_db952a6d-9ea1-482e-aec3-7a93fcd6587c/manager/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.169072 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-s6hpz_116a4b20-5a9a-4456-8816-637e0740a792/kube-rbac-proxy/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.212979 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-s6hpz_116a4b20-5a9a-4456-8816-637e0740a792/manager/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.394583 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-79d8469568-kpw26_7f744173-6696-4797-a55c-85b498bff4da/operator/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.464570 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bc7dc7bd9-v5qrd_eea4d20f-1d77-4e9b-bbc3-644ff1a5a314/kube-rbac-proxy/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.526303 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5d64b45c9c-7q8rq_b33f5230-0a43-418a-a25c-690de07ddc21/manager/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.561958 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bc7dc7bd9-v5qrd_eea4d20f-1d77-4e9b-bbc3-644ff1a5a314/manager/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.573162 4970 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-bdcpn_5cfa1456-1b45-4385-8fc5-27dccef45958/kube-rbac-proxy/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.707003 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-bdcpn_5cfa1456-1b45-4385-8fc5-27dccef45958/manager/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.747143 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-f66b554c6-sfggm_7f9f19d7-d284-4757-94a1-1a86a8f28b17/manager/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.749795 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-f66b554c6-sfggm_7f9f19d7-d284-4757-94a1-1a86a8f28b17/kube-rbac-proxy/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.886557 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-76669f99c-gtp7j_527884ff-dc23-4a9d-8911-aedf784b5eb1/manager/0.log" Sep 30 10:45:10 crc kubenswrapper[4970]: I0930 10:45:10.900782 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-76669f99c-gtp7j_527884ff-dc23-4a9d-8911-aedf784b5eb1/kube-rbac-proxy/0.log" Sep 30 10:45:26 crc kubenswrapper[4970]: I0930 10:45:26.461940 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-65v4s_e0b59dab-c4d7-4baa-9811-f29d7b19be0b/control-plane-machine-set-operator/0.log" Sep 30 10:45:26 crc kubenswrapper[4970]: I0930 10:45:26.537640 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-b78lb_f2267d30-75c6-4002-ae56-b623dc6d7e42/kube-rbac-proxy/0.log" Sep 30 10:45:26 crc kubenswrapper[4970]: I0930 10:45:26.632019 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-b78lb_f2267d30-75c6-4002-ae56-b623dc6d7e42/machine-api-operator/0.log" Sep 30 10:45:38 crc kubenswrapper[4970]: I0930 10:45:38.779411 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-lzk5z_5d792ad1-1442-40dc-a7d1-df5284e06e35/cert-manager-controller/0.log" Sep 30 10:45:38 crc kubenswrapper[4970]: I0930 10:45:38.924039 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-8zhcv_56eac2ba-1797-44ac-9f39-83f71a6f689d/cert-manager-cainjector/0.log" Sep 30 10:45:38 crc kubenswrapper[4970]: I0930 10:45:38.969315 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-l8qdh_ee427339-b272-4768-bb9d-27fb3e8eab0e/cert-manager-webhook/0.log" Sep 30 10:45:50 crc kubenswrapper[4970]: I0930 10:45:50.674810 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-864bb6dfb5-z745b_1b05f65e-1145-40c4-a5cb-e07766072045/nmstate-console-plugin/0.log" Sep 30 10:45:50 crc kubenswrapper[4970]: I0930 10:45:50.835113 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-92md2_cf79d047-21bc-461c-a5c7-7c12104fbf35/nmstate-handler/0.log" Sep 30 10:45:50 crc kubenswrapper[4970]: I0930 10:45:50.917013 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-58fcddf996-xtzzj_42b7f1da-5493-4471-980a-a87efdd8eda2/kube-rbac-proxy/0.log" Sep 30 10:45:50 crc kubenswrapper[4970]: I0930 10:45:50.934357 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58fcddf996-xtzzj_42b7f1da-5493-4471-980a-a87efdd8eda2/nmstate-metrics/0.log" Sep 30 10:45:51 crc kubenswrapper[4970]: I0930 10:45:51.057758 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5d6f6cfd66-lfwqt_f65ea665-ce1c-4197-ae02-5810c62f1355/nmstate-operator/0.log" Sep 30 10:45:51 crc kubenswrapper[4970]: I0930 10:45:51.273863 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6d689559c5-ztvnw_15cbe10c-fb64-4630-bd5b-fd50c2c07d64/nmstate-webhook/0.log" Sep 30 10:46:02 crc kubenswrapper[4970]: I0930 10:46:02.697471 4970 scope.go:117] "RemoveContainer" containerID="5e9db830f3e303016aa8ac1d254c49a6bc1112e347878fa38956ceadddab9092" Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.306033 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5d688f5ffc-vzmvh_0c72cc58-2ee8-414b-a656-a2623e1664f0/kube-rbac-proxy/0.log" Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.358056 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5d688f5ffc-vzmvh_0c72cc58-2ee8-414b-a656-a2623e1664f0/controller/0.log" Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.525000 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-5478bdb765-p7rc4_7c5f78f9-5ebd-434d-82c0-df6af4bc483b/frr-k8s-webhook-server/0.log" Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.548224 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-frr-files/0.log" Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.711699 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-reloader/0.log" Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.714140 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-frr-files/0.log" Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.749862 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-metrics/0.log" Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.756555 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-reloader/0.log" Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.822015 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.822070 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:46:04 
crc kubenswrapper[4970]: I0930 10:46:04.906965 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-frr-files/0.log" Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.925746 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-metrics/0.log" Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.946936 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-reloader/0.log" Sep 30 10:46:04 crc kubenswrapper[4970]: I0930 10:46:04.952275 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-metrics/0.log" Sep 30 10:46:05 crc kubenswrapper[4970]: I0930 10:46:05.118374 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-frr-files/0.log" Sep 30 10:46:05 crc kubenswrapper[4970]: I0930 10:46:05.122533 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-metrics/0.log" Sep 30 10:46:05 crc kubenswrapper[4970]: I0930 10:46:05.145959 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-reloader/0.log" Sep 30 10:46:05 crc kubenswrapper[4970]: I0930 10:46:05.196051 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/controller/0.log" Sep 30 10:46:05 crc kubenswrapper[4970]: I0930 10:46:05.292535 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/frr-metrics/0.log" Sep 30 10:46:05 crc kubenswrapper[4970]: I0930 10:46:05.322202 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/kube-rbac-proxy/0.log" Sep 30 10:46:05 crc kubenswrapper[4970]: I0930 10:46:05.396933 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/kube-rbac-proxy-frr/0.log" Sep 30 10:46:05 crc kubenswrapper[4970]: I0930 10:46:05.501316 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/reloader/0.log" Sep 30 10:46:05 crc kubenswrapper[4970]: I0930 10:46:05.610738 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5689865b7f-lzf5z_44474490-8653-4ad2-8ae3-d4e089664fb8/manager/0.log" Sep 30 10:46:05 crc kubenswrapper[4970]: I0930 10:46:05.770945 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79d5d6bd79-dmktk_0365d978-934a-4079-98be-d612928d9496/webhook-server/0.log" Sep 30 10:46:05 crc kubenswrapper[4970]: I0930 10:46:05.939172 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-f6gvx_fef9dca8-f780-4d0b-b7b8-68cd4f13de1a/kube-rbac-proxy/0.log" Sep 30 10:46:06 crc kubenswrapper[4970]: I0930 10:46:06.465669 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-f6gvx_fef9dca8-f780-4d0b-b7b8-68cd4f13de1a/speaker/0.log" Sep 30 10:46:06 crc kubenswrapper[4970]: I0930 10:46:06.749520 4970 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/frr/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.048072 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/util/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.192503 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/util/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.202672 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/pull/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.215661 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/pull/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.378914 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/util/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.390232 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/pull/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.415807 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/extract/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.524198 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/extract-utilities/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.753375 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/extract-utilities/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.758434 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/extract-content/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.763240 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/extract-content/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.908552 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/extract-utilities/0.log" Sep 30 10:46:18 crc kubenswrapper[4970]: I0930 10:46:18.922619 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/extract-content/0.log" Sep 30 10:46:19 crc kubenswrapper[4970]: I0930 10:46:19.152878 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/extract-utilities/0.log" Sep 30 10:46:19 crc kubenswrapper[4970]: I0930 10:46:19.325947 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/registry-server/0.log" Sep 30 10:46:19 crc kubenswrapper[4970]: I0930 10:46:19.341976 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/extract-utilities/0.log" Sep 30 10:46:19 crc kubenswrapper[4970]: I0930 10:46:19.373526 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/extract-content/0.log" Sep 30 10:46:19 crc kubenswrapper[4970]: I0930 10:46:19.373907 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/extract-content/0.log" Sep 30 10:46:19 crc kubenswrapper[4970]: I0930 10:46:19.572680 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/extract-content/0.log" Sep 30 10:46:19 crc kubenswrapper[4970]: I0930 10:46:19.660262 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/extract-utilities/0.log" Sep 30 10:46:19 crc kubenswrapper[4970]: I0930 10:46:19.766724 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/util/0.log" Sep 30 10:46:19 crc kubenswrapper[4970]: I0930 10:46:19.989435 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/util/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.011682 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/pull/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.106628 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/pull/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.129836 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/registry-server/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.219897 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/util/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.247908 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/extract/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.252814 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/pull/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.425242 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mc94p_d71db2c5-c1c2-42f9-a89e-086c606b9e5f/marketplace-operator/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.475711 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/extract-utilities/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.659267 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/extract-utilities/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.661888 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/extract-content/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.675851 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/extract-content/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.844133 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/extract-content/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.848491 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/extract-utilities/0.log" Sep 30 10:46:20 crc kubenswrapper[4970]: I0930 10:46:20.969606 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/registry-server/0.log" Sep 30 10:46:21 crc kubenswrapper[4970]: I0930 10:46:21.047050 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk6ch_95230c16-a1df-4406-8b31-e350c1981055/extract-utilities/0.log" Sep 30 10:46:21 crc kubenswrapper[4970]: I0930 10:46:21.177402 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk6ch_95230c16-a1df-4406-8b31-e350c1981055/extract-utilities/0.log" Sep 30 10:46:21 crc kubenswrapper[4970]: I0930 10:46:21.210510 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk6ch_95230c16-a1df-4406-8b31-e350c1981055/extract-content/0.log" Sep 30 10:46:21 crc kubenswrapper[4970]: I0930 10:46:21.219245 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk6ch_95230c16-a1df-4406-8b31-e350c1981055/extract-content/0.log" Sep 30 10:46:21 crc kubenswrapper[4970]: I0930 10:46:21.403712 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk6ch_95230c16-a1df-4406-8b31-e350c1981055/extract-content/0.log" Sep 30 10:46:21 crc kubenswrapper[4970]: I0930 10:46:21.410739 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-pk6ch_95230c16-a1df-4406-8b31-e350c1981055/extract-utilities/0.log" Sep 30 10:46:21 crc kubenswrapper[4970]: I0930 10:46:21.890743 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-pk6ch_95230c16-a1df-4406-8b31-e350c1981055/registry-server/0.log" Sep 30 10:46:34 crc kubenswrapper[4970]: I0930 10:46:34.821431 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:46:34 crc kubenswrapper[4970]: I0930 10:46:34.822030 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.187537 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vhmrb"] Sep 30 10:47:03 crc kubenswrapper[4970]: E0930 10:47:03.188473 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f117767-125a-4a83-b88d-f910dfc59ea4" containerName="container-00" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.188490 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f117767-125a-4a83-b88d-f910dfc59ea4" containerName="container-00" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.188713 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f117767-125a-4a83-b88d-f910dfc59ea4" containerName="container-00" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.190223 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.211065 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhmrb"] Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.339850 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68tmv\" (UniqueName: \"kubernetes.io/projected/53479960-137c-4f2e-88be-b708ada9056f-kube-api-access-68tmv\") pod \"redhat-operators-vhmrb\" (UID: \"53479960-137c-4f2e-88be-b708ada9056f\") " pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.340100 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53479960-137c-4f2e-88be-b708ada9056f-utilities\") pod \"redhat-operators-vhmrb\" (UID: \"53479960-137c-4f2e-88be-b708ada9056f\") " pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.340155 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53479960-137c-4f2e-88be-b708ada9056f-catalog-content\") pod \"redhat-operators-vhmrb\" (UID: \"53479960-137c-4f2e-88be-b708ada9056f\") " pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.441451 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53479960-137c-4f2e-88be-b708ada9056f-utilities\") pod \"redhat-operators-vhmrb\" (UID: \"53479960-137c-4f2e-88be-b708ada9056f\") " 
pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.441898 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53479960-137c-4f2e-88be-b708ada9056f-utilities\") pod \"redhat-operators-vhmrb\" (UID: \"53479960-137c-4f2e-88be-b708ada9056f\") " pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.441975 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53479960-137c-4f2e-88be-b708ada9056f-catalog-content\") pod \"redhat-operators-vhmrb\" (UID: \"53479960-137c-4f2e-88be-b708ada9056f\") " pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.442189 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53479960-137c-4f2e-88be-b708ada9056f-catalog-content\") pod \"redhat-operators-vhmrb\" (UID: \"53479960-137c-4f2e-88be-b708ada9056f\") " pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.442313 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68tmv\" (UniqueName: \"kubernetes.io/projected/53479960-137c-4f2e-88be-b708ada9056f-kube-api-access-68tmv\") pod \"redhat-operators-vhmrb\" (UID: \"53479960-137c-4f2e-88be-b708ada9056f\") " pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.474057 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68tmv\" (UniqueName: \"kubernetes.io/projected/53479960-137c-4f2e-88be-b708ada9056f-kube-api-access-68tmv\") pod \"redhat-operators-vhmrb\" (UID: \"53479960-137c-4f2e-88be-b708ada9056f\") " pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:03 crc kubenswrapper[4970]: I0930 10:47:03.510122 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:04 crc kubenswrapper[4970]: I0930 10:47:04.002045 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhmrb"] Sep 30 10:47:04 crc kubenswrapper[4970]: I0930 10:47:04.221937 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhmrb" event={"ID":"53479960-137c-4f2e-88be-b708ada9056f","Type":"ContainerStarted","Data":"644c8d0d752899fbd44a76604e34ef9e5a52a0cacad03528989d77fead3984ac"} Sep 30 10:47:04 crc kubenswrapper[4970]: I0930 10:47:04.821695 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:47:04 crc kubenswrapper[4970]: I0930 10:47:04.821757 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:47:04 crc kubenswrapper[4970]: I0930 10:47:04.821806 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 10:47:04 crc kubenswrapper[4970]: I0930 10:47:04.822647 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09447c97669fe66df87c231f3f61ffa87f4993dc0b0714ceaa7aee2d0c6465bc"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 10:47:04 crc kubenswrapper[4970]: I0930 10:47:04.822727 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://09447c97669fe66df87c231f3f61ffa87f4993dc0b0714ceaa7aee2d0c6465bc" gracePeriod=600 Sep 30 10:47:05 crc kubenswrapper[4970]: I0930 10:47:05.243115 4970 generic.go:334] "Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="09447c97669fe66df87c231f3f61ffa87f4993dc0b0714ceaa7aee2d0c6465bc" exitCode=0 Sep 30 10:47:05 crc kubenswrapper[4970]: I0930 10:47:05.243179 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"09447c97669fe66df87c231f3f61ffa87f4993dc0b0714ceaa7aee2d0c6465bc"} Sep 30 10:47:05 crc kubenswrapper[4970]: I0930 10:47:05.243508 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"} Sep 30 10:47:05 crc kubenswrapper[4970]: I0930 10:47:05.243528 4970 scope.go:117] "RemoveContainer" containerID="cd7ac0fd27cd394ce6d4fe4e9841e138fed09e8478e625db519ace4fbbcd4a88" Sep 30 10:47:05 crc kubenswrapper[4970]: I0930 10:47:05.246480 4970 generic.go:334] "Generic 
(PLEG): container finished" podID="53479960-137c-4f2e-88be-b708ada9056f" containerID="a54a9cdda5a1fea796bf1110e17d0e988b4d3b04156b67587e6ffae05f20acd3" exitCode=0 Sep 30 10:47:05 crc kubenswrapper[4970]: I0930 10:47:05.246520 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhmrb" event={"ID":"53479960-137c-4f2e-88be-b708ada9056f","Type":"ContainerDied","Data":"a54a9cdda5a1fea796bf1110e17d0e988b4d3b04156b67587e6ffae05f20acd3"} Sep 30 10:47:05 crc kubenswrapper[4970]: I0930 10:47:05.250541 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Sep 30 10:47:16 crc kubenswrapper[4970]: I0930 10:47:16.351411 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhmrb" event={"ID":"53479960-137c-4f2e-88be-b708ada9056f","Type":"ContainerStarted","Data":"c6a669b9c7f061e3543800d31951f296f1b9afe50bc7d74e768ba5ca629ce980"} Sep 30 10:47:17 crc kubenswrapper[4970]: I0930 10:47:17.361463 4970 generic.go:334] "Generic (PLEG): container finished" podID="53479960-137c-4f2e-88be-b708ada9056f" containerID="c6a669b9c7f061e3543800d31951f296f1b9afe50bc7d74e768ba5ca629ce980" exitCode=0 Sep 30 10:47:17 crc kubenswrapper[4970]: I0930 10:47:17.361513 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhmrb" event={"ID":"53479960-137c-4f2e-88be-b708ada9056f","Type":"ContainerDied","Data":"c6a669b9c7f061e3543800d31951f296f1b9afe50bc7d74e768ba5ca629ce980"} Sep 30 10:47:19 crc kubenswrapper[4970]: I0930 10:47:19.382175 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhmrb" event={"ID":"53479960-137c-4f2e-88be-b708ada9056f","Type":"ContainerStarted","Data":"687185fac01502e84db7bdf1c126a857966228f9340f3b4ad90689dc2b2d7a8e"} Sep 30 10:47:19 crc kubenswrapper[4970]: I0930 10:47:19.406449 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vhmrb" podStartSLOduration=2.889886842 podStartE2EDuration="16.406423912s" podCreationTimestamp="2025-09-30 10:47:03 +0000 UTC" firstStartedPulling="2025-09-30 10:47:05.250165022 +0000 UTC m=+3638.322015976" lastFinishedPulling="2025-09-30 10:47:18.766702112 +0000 UTC m=+3651.838553046" observedRunningTime="2025-09-30 10:47:19.398063472 +0000 UTC m=+3652.469914406" watchObservedRunningTime="2025-09-30 10:47:19.406423912 +0000 UTC m=+3652.478274846" Sep 30 10:47:23 crc kubenswrapper[4970]: I0930 10:47:23.510492 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:23 crc kubenswrapper[4970]: I0930 10:47:23.511159 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:24 crc kubenswrapper[4970]: I0930 10:47:24.564354 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vhmrb" podUID="53479960-137c-4f2e-88be-b708ada9056f" containerName="registry-server" probeResult="failure" output=< Sep 30 10:47:24 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Sep 30 10:47:24 crc kubenswrapper[4970]: > Sep 30 10:47:33 crc kubenswrapper[4970]: I0930 10:47:33.567265 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:33 crc kubenswrapper[4970]: I0930 10:47:33.622943 4970 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vhmrb" Sep 30 10:47:34 crc kubenswrapper[4970]: I0930 10:47:34.222362 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhmrb"] Sep 30 10:47:34 crc kubenswrapper[4970]: I0930 10:47:34.391630 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pk6ch"] Sep 30 10:47:34 crc kubenswrapper[4970]: I0930 10:47:34.393620 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pk6ch" podUID="95230c16-a1df-4406-8b31-e350c1981055" containerName="registry-server" containerID="cri-o://4c1be12ffd710043b398885315bdffc12465177db26e23e3624fce9b9bd2363f" gracePeriod=2 Sep 30 10:47:34 crc kubenswrapper[4970]: I0930 10:47:34.528942 4970 generic.go:334] "Generic (PLEG): container finished" podID="95230c16-a1df-4406-8b31-e350c1981055" containerID="4c1be12ffd710043b398885315bdffc12465177db26e23e3624fce9b9bd2363f" exitCode=0 Sep 30 10:47:34 crc kubenswrapper[4970]: I0930 10:47:34.529827 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk6ch" event={"ID":"95230c16-a1df-4406-8b31-e350c1981055","Type":"ContainerDied","Data":"4c1be12ffd710043b398885315bdffc12465177db26e23e3624fce9b9bd2363f"} Sep 30 10:47:34 crc kubenswrapper[4970]: I0930 10:47:34.826819 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 10:47:34 crc kubenswrapper[4970]: I0930 10:47:34.948033 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95230c16-a1df-4406-8b31-e350c1981055-utilities\") pod \"95230c16-a1df-4406-8b31-e350c1981055\" (UID: \"95230c16-a1df-4406-8b31-e350c1981055\") " Sep 30 10:47:34 crc kubenswrapper[4970]: I0930 10:47:34.948193 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95230c16-a1df-4406-8b31-e350c1981055-catalog-content\") pod \"95230c16-a1df-4406-8b31-e350c1981055\" (UID: \"95230c16-a1df-4406-8b31-e350c1981055\") " Sep 30 10:47:34 crc kubenswrapper[4970]: I0930 10:47:34.948540 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95230c16-a1df-4406-8b31-e350c1981055-utilities" (OuterVolumeSpecName: "utilities") pod "95230c16-a1df-4406-8b31-e350c1981055" (UID: "95230c16-a1df-4406-8b31-e350c1981055"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:47:34 crc kubenswrapper[4970]: I0930 10:47:34.951229 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpgrw\" (UniqueName: \"kubernetes.io/projected/95230c16-a1df-4406-8b31-e350c1981055-kube-api-access-kpgrw\") pod \"95230c16-a1df-4406-8b31-e350c1981055\" (UID: \"95230c16-a1df-4406-8b31-e350c1981055\") " Sep 30 10:47:34 crc kubenswrapper[4970]: I0930 10:47:34.951752 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95230c16-a1df-4406-8b31-e350c1981055-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:47:34 crc kubenswrapper[4970]: I0930 10:47:34.974614 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95230c16-a1df-4406-8b31-e350c1981055-kube-api-access-kpgrw" (OuterVolumeSpecName: "kube-api-access-kpgrw") pod "95230c16-a1df-4406-8b31-e350c1981055" (UID: "95230c16-a1df-4406-8b31-e350c1981055"). InnerVolumeSpecName "kube-api-access-kpgrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:47:35 crc kubenswrapper[4970]: I0930 10:47:35.026038 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95230c16-a1df-4406-8b31-e350c1981055-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95230c16-a1df-4406-8b31-e350c1981055" (UID: "95230c16-a1df-4406-8b31-e350c1981055"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:47:35 crc kubenswrapper[4970]: I0930 10:47:35.052612 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpgrw\" (UniqueName: \"kubernetes.io/projected/95230c16-a1df-4406-8b31-e350c1981055-kube-api-access-kpgrw\") on node \"crc\" DevicePath \"\"" Sep 30 10:47:35 crc kubenswrapper[4970]: I0930 10:47:35.052655 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95230c16-a1df-4406-8b31-e350c1981055-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:47:35 crc kubenswrapper[4970]: I0930 10:47:35.539323 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pk6ch" Sep 30 10:47:35 crc kubenswrapper[4970]: I0930 10:47:35.539331 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pk6ch" event={"ID":"95230c16-a1df-4406-8b31-e350c1981055","Type":"ContainerDied","Data":"894ae25a4379ba49703b01916ebc815adf6c094b9a51b841ca3dda4dd1b43257"} Sep 30 10:47:35 crc kubenswrapper[4970]: I0930 10:47:35.539424 4970 scope.go:117] "RemoveContainer" containerID="4c1be12ffd710043b398885315bdffc12465177db26e23e3624fce9b9bd2363f" Sep 30 10:47:35 crc kubenswrapper[4970]: I0930 10:47:35.560885 4970 scope.go:117] "RemoveContainer" containerID="f37c7f8db310539953aa04392c292947977e9177cbc7ba321c306f5d9b405439" Sep 30 10:47:35 crc kubenswrapper[4970]: I0930 10:47:35.585912 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pk6ch"] Sep 30 10:47:35 crc kubenswrapper[4970]: I0930 10:47:35.601845 4970 scope.go:117] "RemoveContainer" containerID="65f2fdb0f937a6ad0033a2e83c6fd2deccf15097b87723328f40f6975bc2813d" Sep 30 10:47:35 crc kubenswrapper[4970]: I0930 10:47:35.603378 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pk6ch"] Sep 30 10:47:35 crc kubenswrapper[4970]: I0930 10:47:35.680234 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95230c16-a1df-4406-8b31-e350c1981055" path="/var/lib/kubelet/pods/95230c16-a1df-4406-8b31-e350c1981055/volumes" Sep 30 10:48:17 crc kubenswrapper[4970]: I0930 10:48:17.027305 4970 generic.go:334] "Generic (PLEG): container finished" podID="aae4be87-4f8b-4024-bfff-f07824adde63" containerID="23536f499743797a9a66962965dfb4d2cb9fe2d793cc9064ba720eaae77310c7" exitCode=0 Sep 30 10:48:17 crc kubenswrapper[4970]: I0930 10:48:17.027408 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vb8xq/must-gather-r6xqq" event={"ID":"aae4be87-4f8b-4024-bfff-f07824adde63","Type":"ContainerDied","Data":"23536f499743797a9a66962965dfb4d2cb9fe2d793cc9064ba720eaae77310c7"} Sep 30 10:48:17 crc kubenswrapper[4970]: I0930 10:48:17.028331 4970 scope.go:117] "RemoveContainer" containerID="23536f499743797a9a66962965dfb4d2cb9fe2d793cc9064ba720eaae77310c7" Sep 30 10:48:17 crc kubenswrapper[4970]: I0930 10:48:17.322541 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vb8xq_must-gather-r6xqq_aae4be87-4f8b-4024-bfff-f07824adde63/gather/0.log" Sep 30 10:48:26 crc kubenswrapper[4970]: I0930 10:48:26.610745 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vb8xq/must-gather-r6xqq"] Sep 30 10:48:26 crc kubenswrapper[4970]: I0930 10:48:26.611621 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-vb8xq/must-gather-r6xqq" podUID="aae4be87-4f8b-4024-bfff-f07824adde63" containerName="copy" containerID="cri-o://b3422e91cd626e467332925b20321111e9acfb8a2178171ba72cbba59b50160d" gracePeriod=2 Sep 30 10:48:26 crc kubenswrapper[4970]: I0930 10:48:26.620441 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vb8xq/must-gather-r6xqq"] Sep 30 10:48:27 crc kubenswrapper[4970]: I0930 10:48:27.119922 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vb8xq_must-gather-r6xqq_aae4be87-4f8b-4024-bfff-f07824adde63/copy/0.log" Sep 30 10:48:27 crc kubenswrapper[4970]: I0930 10:48:27.120600 4970 generic.go:334] "Generic (PLEG): container finished" 
podID="aae4be87-4f8b-4024-bfff-f07824adde63" containerID="b3422e91cd626e467332925b20321111e9acfb8a2178171ba72cbba59b50160d" exitCode=143 Sep 30 10:48:27 crc kubenswrapper[4970]: I0930 10:48:27.120659 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cc227470ccda2cb88c79597c7cbe495a3abb3aeaaebdd635da64c6a49ab4c2c" Sep 30 10:48:27 crc kubenswrapper[4970]: I0930 10:48:27.175152 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vb8xq_must-gather-r6xqq_aae4be87-4f8b-4024-bfff-f07824adde63/copy/0.log" Sep 30 10:48:27 crc kubenswrapper[4970]: I0930 10:48:27.175544 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vb8xq/must-gather-r6xqq" Sep 30 10:48:27 crc kubenswrapper[4970]: I0930 10:48:27.222638 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aae4be87-4f8b-4024-bfff-f07824adde63-must-gather-output\") pod \"aae4be87-4f8b-4024-bfff-f07824adde63\" (UID: \"aae4be87-4f8b-4024-bfff-f07824adde63\") " Sep 30 10:48:27 crc kubenswrapper[4970]: I0930 10:48:27.222715 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b47h8\" (UniqueName: \"kubernetes.io/projected/aae4be87-4f8b-4024-bfff-f07824adde63-kube-api-access-b47h8\") pod \"aae4be87-4f8b-4024-bfff-f07824adde63\" (UID: \"aae4be87-4f8b-4024-bfff-f07824adde63\") " Sep 30 10:48:27 crc kubenswrapper[4970]: I0930 10:48:27.240253 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae4be87-4f8b-4024-bfff-f07824adde63-kube-api-access-b47h8" (OuterVolumeSpecName: "kube-api-access-b47h8") pod "aae4be87-4f8b-4024-bfff-f07824adde63" (UID: "aae4be87-4f8b-4024-bfff-f07824adde63"). InnerVolumeSpecName "kube-api-access-b47h8". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:48:27 crc kubenswrapper[4970]: I0930 10:48:27.325429 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b47h8\" (UniqueName: \"kubernetes.io/projected/aae4be87-4f8b-4024-bfff-f07824adde63-kube-api-access-b47h8\") on node \"crc\" DevicePath \"\"" Sep 30 10:48:27 crc kubenswrapper[4970]: I0930 10:48:27.358184 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aae4be87-4f8b-4024-bfff-f07824adde63-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "aae4be87-4f8b-4024-bfff-f07824adde63" (UID: "aae4be87-4f8b-4024-bfff-f07824adde63"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:48:27 crc kubenswrapper[4970]: I0930 10:48:27.427081 4970 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/aae4be87-4f8b-4024-bfff-f07824adde63-must-gather-output\") on node \"crc\" DevicePath \"\"" Sep 30 10:48:27 crc kubenswrapper[4970]: I0930 10:48:27.683365 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae4be87-4f8b-4024-bfff-f07824adde63" path="/var/lib/kubelet/pods/aae4be87-4f8b-4024-bfff-f07824adde63/volumes" Sep 30 10:48:28 crc kubenswrapper[4970]: I0930 10:48:28.130063 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vb8xq/must-gather-r6xqq" Sep 30 10:48:30 crc kubenswrapper[4970]: I0930 10:48:30.937514 4970 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-597dc56955-zfx9s" podUID="1ade2be1-8027-4a99-ae4d-f0394e4d9c1d" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Sep 30 10:49:02 crc kubenswrapper[4970]: I0930 10:49:02.865917 4970 scope.go:117] "RemoveContainer" containerID="23536f499743797a9a66962965dfb4d2cb9fe2d793cc9064ba720eaae77310c7" Sep 30 10:49:02 crc kubenswrapper[4970]: I0930 10:49:02.902222 4970 scope.go:117] "RemoveContainer" containerID="b3422e91cd626e467332925b20321111e9acfb8a2178171ba72cbba59b50160d" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.274619 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xk4t7/must-gather-6pc88"] Sep 30 10:49:11 crc kubenswrapper[4970]: E0930 10:49:11.275454 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95230c16-a1df-4406-8b31-e350c1981055" containerName="registry-server" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.275466 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="95230c16-a1df-4406-8b31-e350c1981055" containerName="registry-server" Sep 30 10:49:11 crc kubenswrapper[4970]: E0930 10:49:11.275480 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95230c16-a1df-4406-8b31-e350c1981055" containerName="extract-utilities" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.275486 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="95230c16-a1df-4406-8b31-e350c1981055" containerName="extract-utilities" Sep 30 10:49:11 crc kubenswrapper[4970]: E0930 10:49:11.275496 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae4be87-4f8b-4024-bfff-f07824adde63" containerName="gather" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.275503 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae4be87-4f8b-4024-bfff-f07824adde63" containerName="gather" Sep 30 10:49:11 crc kubenswrapper[4970]: E0930 10:49:11.275525 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae4be87-4f8b-4024-bfff-f07824adde63" containerName="copy" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.275531 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae4be87-4f8b-4024-bfff-f07824adde63" containerName="copy" Sep 30 10:49:11 crc kubenswrapper[4970]: E0930 10:49:11.275548 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95230c16-a1df-4406-8b31-e350c1981055" containerName="extract-content" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.275554 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="95230c16-a1df-4406-8b31-e350c1981055" containerName="extract-content" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.275728 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="95230c16-a1df-4406-8b31-e350c1981055" containerName="registry-server" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.275738 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae4be87-4f8b-4024-bfff-f07824adde63" containerName="gather" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.275762 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae4be87-4f8b-4024-bfff-f07824adde63" containerName="copy" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.276706 4970 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-must-gather-xk4t7/must-gather-6pc88" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.278885 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xk4t7"/"openshift-service-ca.crt" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.278891 4970 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-xk4t7"/"default-dockercfg-k8r65" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.279841 4970 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-xk4t7"/"kube-root-ca.crt" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.318549 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xk4t7/must-gather-6pc88"] Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.440273 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8bd969ad-610a-4fa4-a8e9-39ec6f63589e-must-gather-output\") pod \"must-gather-6pc88\" (UID: \"8bd969ad-610a-4fa4-a8e9-39ec6f63589e\") " pod="openshift-must-gather-xk4t7/must-gather-6pc88" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.440375 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktxtw\" (UniqueName: \"kubernetes.io/projected/8bd969ad-610a-4fa4-a8e9-39ec6f63589e-kube-api-access-ktxtw\") pod \"must-gather-6pc88\" (UID: \"8bd969ad-610a-4fa4-a8e9-39ec6f63589e\") " pod="openshift-must-gather-xk4t7/must-gather-6pc88" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.542174 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktxtw\" (UniqueName: \"kubernetes.io/projected/8bd969ad-610a-4fa4-a8e9-39ec6f63589e-kube-api-access-ktxtw\") pod \"must-gather-6pc88\" (UID: \"8bd969ad-610a-4fa4-a8e9-39ec6f63589e\") " pod="openshift-must-gather-xk4t7/must-gather-6pc88" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.542532 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8bd969ad-610a-4fa4-a8e9-39ec6f63589e-must-gather-output\") pod \"must-gather-6pc88\" (UID: \"8bd969ad-610a-4fa4-a8e9-39ec6f63589e\") " pod="openshift-must-gather-xk4t7/must-gather-6pc88" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.542960 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8bd969ad-610a-4fa4-a8e9-39ec6f63589e-must-gather-output\") pod \"must-gather-6pc88\" (UID: \"8bd969ad-610a-4fa4-a8e9-39ec6f63589e\") " pod="openshift-must-gather-xk4t7/must-gather-6pc88" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.561536 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktxtw\" (UniqueName: \"kubernetes.io/projected/8bd969ad-610a-4fa4-a8e9-39ec6f63589e-kube-api-access-ktxtw\") pod \"must-gather-6pc88\" (UID: \"8bd969ad-610a-4fa4-a8e9-39ec6f63589e\") " pod="openshift-must-gather-xk4t7/must-gather-6pc88" Sep 30 10:49:11 crc kubenswrapper[4970]: I0930 10:49:11.601976 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xk4t7/must-gather-6pc88" Sep 30 10:49:12 crc kubenswrapper[4970]: I0930 10:49:12.638845 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-xk4t7/must-gather-6pc88"] Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.276956 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tvx7d"] Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.279463 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.300701 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tvx7d"] Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.424225 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f2b79f-c9c0-438b-ba9e-30f73340e281-catalog-content\") pod \"certified-operators-tvx7d\" (UID: \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\") " pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.424635 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkjkc\" (UniqueName: \"kubernetes.io/projected/d1f2b79f-c9c0-438b-ba9e-30f73340e281-kube-api-access-bkjkc\") pod \"certified-operators-tvx7d\" (UID: \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\") " pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.424792 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f2b79f-c9c0-438b-ba9e-30f73340e281-utilities\") pod \"certified-operators-tvx7d\" (UID: \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\") " pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.526585 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f2b79f-c9c0-438b-ba9e-30f73340e281-utilities\") pod \"certified-operators-tvx7d\" (UID: \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\") " pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.526862 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f2b79f-c9c0-438b-ba9e-30f73340e281-catalog-content\") pod \"certified-operators-tvx7d\" (UID: \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\") " pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.527025 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkjkc\" (UniqueName: \"kubernetes.io/projected/d1f2b79f-c9c0-438b-ba9e-30f73340e281-kube-api-access-bkjkc\") pod \"certified-operators-tvx7d\" (UID: \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\") " pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.527263 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f2b79f-c9c0-438b-ba9e-30f73340e281-utilities\") pod \"certified-operators-tvx7d\" (UID: \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\") " 
pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.527376 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f2b79f-c9c0-438b-ba9e-30f73340e281-catalog-content\") pod \"certified-operators-tvx7d\" (UID: \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\") " pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.558091 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkjkc\" (UniqueName: \"kubernetes.io/projected/d1f2b79f-c9c0-438b-ba9e-30f73340e281-kube-api-access-bkjkc\") pod \"certified-operators-tvx7d\" (UID: \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\") " pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.608315 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.626616 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/must-gather-6pc88" event={"ID":"8bd969ad-610a-4fa4-a8e9-39ec6f63589e","Type":"ContainerStarted","Data":"4127cf339d2c4b1aac83469561563da2b00488c3b92680d9887ecf56555c5228"} Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.626657 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/must-gather-6pc88" event={"ID":"8bd969ad-610a-4fa4-a8e9-39ec6f63589e","Type":"ContainerStarted","Data":"765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e"} Sep 30 10:49:13 crc kubenswrapper[4970]: I0930 10:49:13.626667 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/must-gather-6pc88" event={"ID":"8bd969ad-610a-4fa4-a8e9-39ec6f63589e","Type":"ContainerStarted","Data":"0a19a878b4a948191c012b6e62413fbe13cc8080c3beb4a315d86bd99223e6c2"} Sep 30 10:49:14 crc kubenswrapper[4970]: I0930 10:49:14.080512 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xk4t7/must-gather-6pc88" podStartSLOduration=3.080495199 podStartE2EDuration="3.080495199s" podCreationTimestamp="2025-09-30 10:49:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:49:13.648138264 +0000 UTC m=+3766.719989188" watchObservedRunningTime="2025-09-30 10:49:14.080495199 +0000 UTC m=+3767.152346133" Sep 30 10:49:14 crc kubenswrapper[4970]: I0930 10:49:14.088582 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tvx7d"] Sep 30 10:49:14 crc kubenswrapper[4970]: I0930 10:49:14.636391 4970 generic.go:334] "Generic (PLEG): container finished" podID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" containerID="1370f677f13d3fe10ba8619b042a6bfd974f5bd768ca691c141717b77f0a6ed1" exitCode=0 Sep 30 10:49:14 crc kubenswrapper[4970]: I0930 10:49:14.636465 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvx7d" event={"ID":"d1f2b79f-c9c0-438b-ba9e-30f73340e281","Type":"ContainerDied","Data":"1370f677f13d3fe10ba8619b042a6bfd974f5bd768ca691c141717b77f0a6ed1"} Sep 30 10:49:14 crc kubenswrapper[4970]: I0930 10:49:14.636720 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvx7d" 
event={"ID":"d1f2b79f-c9c0-438b-ba9e-30f73340e281","Type":"ContainerStarted","Data":"58125a7373a874a1674140e4137f6809b75f2f188b70b1526de675344a7b461a"} Sep 30 10:49:16 crc kubenswrapper[4970]: I0930 10:49:16.674630 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvx7d" event={"ID":"d1f2b79f-c9c0-438b-ba9e-30f73340e281","Type":"ContainerStarted","Data":"e9bab68fa9e770a7c88f1a3bdcfc3ef82b00021432793f0fc09e2becccbb11dc"} Sep 30 10:49:16 crc kubenswrapper[4970]: I0930 10:49:16.994425 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xk4t7/crc-debug-f78f6"] Sep 30 10:49:16 crc kubenswrapper[4970]: I0930 10:49:16.996377 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk4t7/crc-debug-f78f6" Sep 30 10:49:17 crc kubenswrapper[4970]: I0930 10:49:17.000618 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96ae6ec6-de12-4067-992f-800d13e06611-host\") pod \"crc-debug-f78f6\" (UID: \"96ae6ec6-de12-4067-992f-800d13e06611\") " pod="openshift-must-gather-xk4t7/crc-debug-f78f6" Sep 30 10:49:17 crc kubenswrapper[4970]: I0930 10:49:17.000679 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqbnh\" (UniqueName: \"kubernetes.io/projected/96ae6ec6-de12-4067-992f-800d13e06611-kube-api-access-kqbnh\") pod \"crc-debug-f78f6\" (UID: \"96ae6ec6-de12-4067-992f-800d13e06611\") " pod="openshift-must-gather-xk4t7/crc-debug-f78f6" Sep 30 10:49:17 crc kubenswrapper[4970]: I0930 10:49:17.102111 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96ae6ec6-de12-4067-992f-800d13e06611-host\") pod \"crc-debug-f78f6\" (UID: \"96ae6ec6-de12-4067-992f-800d13e06611\") " pod="openshift-must-gather-xk4t7/crc-debug-f78f6" Sep 30 10:49:17 crc kubenswrapper[4970]: I0930 10:49:17.102201 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqbnh\" (UniqueName: \"kubernetes.io/projected/96ae6ec6-de12-4067-992f-800d13e06611-kube-api-access-kqbnh\") pod \"crc-debug-f78f6\" (UID: \"96ae6ec6-de12-4067-992f-800d13e06611\") " pod="openshift-must-gather-xk4t7/crc-debug-f78f6" Sep 30 10:49:17 crc kubenswrapper[4970]: I0930 10:49:17.102220 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96ae6ec6-de12-4067-992f-800d13e06611-host\") pod \"crc-debug-f78f6\" (UID: \"96ae6ec6-de12-4067-992f-800d13e06611\") " pod="openshift-must-gather-xk4t7/crc-debug-f78f6" Sep 30 10:49:17 crc kubenswrapper[4970]: I0930 10:49:17.121511 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqbnh\" (UniqueName: \"kubernetes.io/projected/96ae6ec6-de12-4067-992f-800d13e06611-kube-api-access-kqbnh\") pod \"crc-debug-f78f6\" (UID: \"96ae6ec6-de12-4067-992f-800d13e06611\") " pod="openshift-must-gather-xk4t7/crc-debug-f78f6" Sep 30 10:49:17 crc kubenswrapper[4970]: I0930 10:49:17.313579 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xk4t7/crc-debug-f78f6" Sep 30 10:49:17 crc kubenswrapper[4970]: W0930 10:49:17.340080 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96ae6ec6_de12_4067_992f_800d13e06611.slice/crio-b3c8c1a382720cb223f87f5874ad487f5ff5b7fa3ddb6c9edf3de6a9499783d6 WatchSource:0}: Error finding container b3c8c1a382720cb223f87f5874ad487f5ff5b7fa3ddb6c9edf3de6a9499783d6: Status 404 returned error can't find the container with id b3c8c1a382720cb223f87f5874ad487f5ff5b7fa3ddb6c9edf3de6a9499783d6 Sep 30 10:49:17 crc kubenswrapper[4970]: I0930 10:49:17.683494 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/crc-debug-f78f6" event={"ID":"96ae6ec6-de12-4067-992f-800d13e06611","Type":"ContainerStarted","Data":"b3c8c1a382720cb223f87f5874ad487f5ff5b7fa3ddb6c9edf3de6a9499783d6"} Sep 30 10:49:19 crc kubenswrapper[4970]: I0930 10:49:19.698497 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/crc-debug-f78f6" event={"ID":"96ae6ec6-de12-4067-992f-800d13e06611","Type":"ContainerStarted","Data":"658f37273f47ac14415581b069e398db7b06b6026ce5cf52d4923ebfc8c1663c"} Sep 30 10:49:20 crc kubenswrapper[4970]: I0930 10:49:20.723406 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xk4t7/crc-debug-f78f6" podStartSLOduration=4.7233868 podStartE2EDuration="4.7233868s" podCreationTimestamp="2025-09-30 10:49:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:49:20.719266606 +0000 UTC m=+3773.791117530" watchObservedRunningTime="2025-09-30 10:49:20.7233868 +0000 UTC m=+3773.795237734" Sep 30 10:49:21 crc kubenswrapper[4970]: I0930 10:49:21.718102 4970 generic.go:334] "Generic (PLEG): container finished" podID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" containerID="e9bab68fa9e770a7c88f1a3bdcfc3ef82b00021432793f0fc09e2becccbb11dc" exitCode=0 Sep 30 10:49:21 crc kubenswrapper[4970]: I0930 10:49:21.718213 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvx7d" event={"ID":"d1f2b79f-c9c0-438b-ba9e-30f73340e281","Type":"ContainerDied","Data":"e9bab68fa9e770a7c88f1a3bdcfc3ef82b00021432793f0fc09e2becccbb11dc"} Sep 30 10:49:22 crc kubenswrapper[4970]: I0930 10:49:22.728899 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvx7d" event={"ID":"d1f2b79f-c9c0-438b-ba9e-30f73340e281","Type":"ContainerStarted","Data":"1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb"} Sep 30 10:49:23 crc kubenswrapper[4970]: I0930 10:49:23.608562 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:23 crc kubenswrapper[4970]: I0930 10:49:23.613138 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:24 crc kubenswrapper[4970]: I0930 10:49:24.659453 4970 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tvx7d" podUID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" containerName="registry-server" probeResult="failure" output=< Sep 30 10:49:24 crc kubenswrapper[4970]: timeout: failed to connect service ":50051" within 1s Sep 30 10:49:24 crc kubenswrapper[4970]: > Sep 30 10:49:33 crc 
kubenswrapper[4970]: I0930 10:49:33.667176 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:33 crc kubenswrapper[4970]: I0930 10:49:33.688135 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tvx7d" podStartSLOduration=13.16975457 podStartE2EDuration="20.688115487s" podCreationTimestamp="2025-09-30 10:49:13 +0000 UTC" firstStartedPulling="2025-09-30 10:49:14.6399505 +0000 UTC m=+3767.711801434" lastFinishedPulling="2025-09-30 10:49:22.158311417 +0000 UTC m=+3775.230162351" observedRunningTime="2025-09-30 10:49:22.755100926 +0000 UTC m=+3775.826951870" watchObservedRunningTime="2025-09-30 10:49:33.688115487 +0000 UTC m=+3786.759966421" Sep 30 10:49:33 crc kubenswrapper[4970]: I0930 10:49:33.734316 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:33 crc kubenswrapper[4970]: I0930 10:49:33.912180 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tvx7d"] Sep 30 10:49:34 crc kubenswrapper[4970]: I0930 10:49:34.820868 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:49:34 crc kubenswrapper[4970]: I0930 10:49:34.821379 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:49:34 crc kubenswrapper[4970]: I0930 10:49:34.846024 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tvx7d" podUID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" containerName="registry-server" containerID="cri-o://1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb" gracePeriod=2 Sep 30 10:49:34 crc kubenswrapper[4970]: E0930 10:49:34.930877 4970 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1f2b79f_c9c0_438b_ba9e_30f73340e281.slice/crio-conmon-1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1f2b79f_c9c0_438b_ba9e_30f73340e281.slice/crio-1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb.scope\": RecentStats: unable to find data in memory cache]" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.344868 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.460252 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f2b79f-c9c0-438b-ba9e-30f73340e281-utilities\") pod \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\" (UID: \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\") " Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.460810 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f2b79f-c9c0-438b-ba9e-30f73340e281-catalog-content\") pod \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\" (UID: \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\") " Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.460914 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkjkc\" (UniqueName: \"kubernetes.io/projected/d1f2b79f-c9c0-438b-ba9e-30f73340e281-kube-api-access-bkjkc\") pod \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\" (UID: \"d1f2b79f-c9c0-438b-ba9e-30f73340e281\") " Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.461126 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1f2b79f-c9c0-438b-ba9e-30f73340e281-utilities" (OuterVolumeSpecName: "utilities") pod "d1f2b79f-c9c0-438b-ba9e-30f73340e281" (UID: "d1f2b79f-c9c0-438b-ba9e-30f73340e281"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.461478 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1f2b79f-c9c0-438b-ba9e-30f73340e281-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.488588 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1f2b79f-c9c0-438b-ba9e-30f73340e281-kube-api-access-bkjkc" (OuterVolumeSpecName: "kube-api-access-bkjkc") pod "d1f2b79f-c9c0-438b-ba9e-30f73340e281" (UID: "d1f2b79f-c9c0-438b-ba9e-30f73340e281"). InnerVolumeSpecName "kube-api-access-bkjkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.509697 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1f2b79f-c9c0-438b-ba9e-30f73340e281-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1f2b79f-c9c0-438b-ba9e-30f73340e281" (UID: "d1f2b79f-c9c0-438b-ba9e-30f73340e281"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.563872 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1f2b79f-c9c0-438b-ba9e-30f73340e281-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.563900 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkjkc\" (UniqueName: \"kubernetes.io/projected/d1f2b79f-c9c0-438b-ba9e-30f73340e281-kube-api-access-bkjkc\") on node \"crc\" DevicePath \"\"" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.859664 4970 generic.go:334] "Generic (PLEG): container finished" podID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" containerID="1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb" exitCode=0 Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.859725 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvx7d" event={"ID":"d1f2b79f-c9c0-438b-ba9e-30f73340e281","Type":"ContainerDied","Data":"1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb"} Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.859737 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tvx7d" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.859753 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tvx7d" event={"ID":"d1f2b79f-c9c0-438b-ba9e-30f73340e281","Type":"ContainerDied","Data":"58125a7373a874a1674140e4137f6809b75f2f188b70b1526de675344a7b461a"} Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.859772 4970 scope.go:117] "RemoveContainer" containerID="1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.885455 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tvx7d"] Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.893687 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tvx7d"] Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.895314 4970 scope.go:117] "RemoveContainer" containerID="e9bab68fa9e770a7c88f1a3bdcfc3ef82b00021432793f0fc09e2becccbb11dc" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.916280 4970 scope.go:117] "RemoveContainer" containerID="1370f677f13d3fe10ba8619b042a6bfd974f5bd768ca691c141717b77f0a6ed1" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.986729 4970 scope.go:117] "RemoveContainer" containerID="1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb" Sep 30 10:49:35 crc kubenswrapper[4970]: E0930 10:49:35.987184 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb\": container with ID starting with 1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb not found: ID does not exist" containerID="1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.987236 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb"} err="failed to get container status 
\"1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb\": rpc error: code = NotFound desc = could not find container \"1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb\": container with ID starting with 1e4c7950fef1943ddcc374a243a37b4f24a21aa47aa02527323fad18f9c295bb not found: ID does not exist" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.987265 4970 scope.go:117] "RemoveContainer" containerID="e9bab68fa9e770a7c88f1a3bdcfc3ef82b00021432793f0fc09e2becccbb11dc" Sep 30 10:49:35 crc kubenswrapper[4970]: E0930 10:49:35.987616 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9bab68fa9e770a7c88f1a3bdcfc3ef82b00021432793f0fc09e2becccbb11dc\": container with ID starting with e9bab68fa9e770a7c88f1a3bdcfc3ef82b00021432793f0fc09e2becccbb11dc not found: ID does not exist" containerID="e9bab68fa9e770a7c88f1a3bdcfc3ef82b00021432793f0fc09e2becccbb11dc" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.987643 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9bab68fa9e770a7c88f1a3bdcfc3ef82b00021432793f0fc09e2becccbb11dc"} err="failed to get container status \"e9bab68fa9e770a7c88f1a3bdcfc3ef82b00021432793f0fc09e2becccbb11dc\": rpc error: code = NotFound desc = could not find container \"e9bab68fa9e770a7c88f1a3bdcfc3ef82b00021432793f0fc09e2becccbb11dc\": container with ID starting with e9bab68fa9e770a7c88f1a3bdcfc3ef82b00021432793f0fc09e2becccbb11dc not found: ID does not exist" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.987667 4970 scope.go:117] "RemoveContainer" containerID="1370f677f13d3fe10ba8619b042a6bfd974f5bd768ca691c141717b77f0a6ed1" Sep 30 10:49:35 crc kubenswrapper[4970]: E0930 10:49:35.988044 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1370f677f13d3fe10ba8619b042a6bfd974f5bd768ca691c141717b77f0a6ed1\": container with ID starting with 1370f677f13d3fe10ba8619b042a6bfd974f5bd768ca691c141717b77f0a6ed1 not found: ID does not exist" containerID="1370f677f13d3fe10ba8619b042a6bfd974f5bd768ca691c141717b77f0a6ed1" Sep 30 10:49:35 crc kubenswrapper[4970]: I0930 10:49:35.988083 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1370f677f13d3fe10ba8619b042a6bfd974f5bd768ca691c141717b77f0a6ed1"} err="failed to get container status \"1370f677f13d3fe10ba8619b042a6bfd974f5bd768ca691c141717b77f0a6ed1\": rpc error: code = NotFound desc = could not find container \"1370f677f13d3fe10ba8619b042a6bfd974f5bd768ca691c141717b77f0a6ed1\": container with ID starting with 1370f677f13d3fe10ba8619b042a6bfd974f5bd768ca691c141717b77f0a6ed1 not found: ID does not exist" Sep 30 10:49:37 crc kubenswrapper[4970]: I0930 10:49:37.686227 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" path="/var/lib/kubelet/pods/d1f2b79f-c9c0-438b-ba9e-30f73340e281/volumes" Sep 30 10:50:04 crc kubenswrapper[4970]: I0930 10:50:04.822101 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:50:04 crc kubenswrapper[4970]: I0930 10:50:04.822614 4970 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:50:16 crc kubenswrapper[4970]: I0930 10:50:16.313711 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-868647ddbb-dxwsf_bbcbf5f3-02eb-4969-af25-0c219017b29a/barbican-api/0.log" Sep 30 10:50:16 crc kubenswrapper[4970]: I0930 10:50:16.480136 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-868647ddbb-dxwsf_bbcbf5f3-02eb-4969-af25-0c219017b29a/barbican-api-log/0.log" Sep 30 10:50:16 crc kubenswrapper[4970]: I0930 10:50:16.733477 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56bb9f86b-7tl55_cb32bf8e-e046-4e85-87b2-56993b0e6e30/barbican-keystone-listener/0.log" Sep 30 10:50:16 crc kubenswrapper[4970]: I0930 10:50:16.769675 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-56bb9f86b-7tl55_cb32bf8e-e046-4e85-87b2-56993b0e6e30/barbican-keystone-listener-log/0.log" Sep 30 10:50:16 crc kubenswrapper[4970]: I0930 10:50:16.904948 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-767b995857-mf5zx_f4d4c15f-9170-47c3-9716-919828e0cb40/barbican-worker/0.log" Sep 30 10:50:16 crc kubenswrapper[4970]: I0930 10:50:16.955091 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-767b995857-mf5zx_f4d4c15f-9170-47c3-9716-919828e0cb40/barbican-worker-log/0.log" Sep 30 10:50:17 crc kubenswrapper[4970]: I0930 10:50:17.190051 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-jzmj9_f4b4ad42-77b6-450b-befa-9bb0012fe9ae/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:17 crc kubenswrapper[4970]: I0930 10:50:17.349365 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3dc8250d-7e71-408b-8aa3-947ebe6ef0a1/proxy-httpd/0.log" Sep 30 10:50:17 crc kubenswrapper[4970]: I0930 10:50:17.422648 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3dc8250d-7e71-408b-8aa3-947ebe6ef0a1/ceilometer-notification-agent/0.log" Sep 30 10:50:17 crc kubenswrapper[4970]: I0930 10:50:17.426963 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3dc8250d-7e71-408b-8aa3-947ebe6ef0a1/ceilometer-central-agent/0.log" Sep 30 10:50:17 crc kubenswrapper[4970]: I0930 10:50:17.525551 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_3dc8250d-7e71-408b-8aa3-947ebe6ef0a1/sg-core/0.log" Sep 30 10:50:17 crc kubenswrapper[4970]: I0930 10:50:17.701152 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bb0312b9-337a-4175-ae77-cd4964578d13/cinder-api/0.log" Sep 30 10:50:17 crc kubenswrapper[4970]: I0930 10:50:17.709610 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bb0312b9-337a-4175-ae77-cd4964578d13/cinder-api-log/0.log" Sep 30 10:50:17 crc kubenswrapper[4970]: I0930 10:50:17.963706 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_7510aa65-ae21-4344-94a7-9354f0822ae3/cinder-scheduler/0.log" Sep 30 10:50:17 crc kubenswrapper[4970]: I0930 10:50:17.967396 4970 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_cinder-scheduler-0_7510aa65-ae21-4344-94a7-9354f0822ae3/probe/0.log" Sep 30 10:50:18 crc kubenswrapper[4970]: I0930 10:50:18.145715 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-z8zdd_d70e09c1-47df-4742-b2fc-77c354169b46/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:18 crc kubenswrapper[4970]: I0930 10:50:18.372170 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-bsd8r_039233e2-0b03-4514-b359-5552e4d09ffc/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:18 crc kubenswrapper[4970]: I0930 10:50:18.480473 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-6bk6t_90db2984-4178-487d-812f-a51f345ae911/init/0.log" Sep 30 10:50:18 crc kubenswrapper[4970]: I0930 10:50:18.631222 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-6bk6t_90db2984-4178-487d-812f-a51f345ae911/init/0.log" Sep 30 10:50:18 crc kubenswrapper[4970]: I0930 10:50:18.745319 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8c6f6df99-6bk6t_90db2984-4178-487d-812f-a51f345ae911/dnsmasq-dns/0.log" Sep 30 10:50:18 crc kubenswrapper[4970]: I0930 10:50:18.892767 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-77k7p_877ee61c-4abb-4daf-a42d-f2d26afbe137/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:18 crc kubenswrapper[4970]: I0930 10:50:18.991473 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e691ead3-4698-47b1-9ea4-b63f8e649a34/glance-httpd/0.log" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.123867 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e691ead3-4698-47b1-9ea4-b63f8e649a34/glance-log/0.log" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.229073 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_354c5f9e-ca1b-4724-960f-a376abda6ee2/glance-httpd/0.log" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.311326 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_354c5f9e-ca1b-4724-960f-a376abda6ee2/glance-log/0.log" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.584545 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8fbb4f9c8-n8t5n_8b479413-73f2-4159-8ec6-5e23f139c53c/horizon/0.log" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.719356 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bthwp"] Sep 30 10:50:19 crc kubenswrapper[4970]: E0930 10:50:19.720140 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" containerName="extract-content" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.720163 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" containerName="extract-content" Sep 30 10:50:19 crc kubenswrapper[4970]: E0930 10:50:19.720183 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" containerName="registry-server" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 
10:50:19.720192 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" containerName="registry-server" Sep 30 10:50:19 crc kubenswrapper[4970]: E0930 10:50:19.720215 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" containerName="extract-utilities" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.720225 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" containerName="extract-utilities" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.734328 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1f2b79f-c9c0-438b-ba9e-30f73340e281" containerName="registry-server" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.745630 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.773978 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bthwp"] Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.779856 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-7r7bc_d25a3525-d30d-4e16-b446-dace1e7987a0/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.892422 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-8fbb4f9c8-n8t5n_8b479413-73f2-4159-8ec6-5e23f139c53c/horizon-log/0.log" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.898748 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180cd06e-ebb0-4544-9808-7d857c06f66f-utilities\") pod \"redhat-marketplace-bthwp\" (UID: \"180cd06e-ebb0-4544-9808-7d857c06f66f\") " pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.898811 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180cd06e-ebb0-4544-9808-7d857c06f66f-catalog-content\") pod \"redhat-marketplace-bthwp\" (UID: \"180cd06e-ebb0-4544-9808-7d857c06f66f\") " pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:19 crc kubenswrapper[4970]: I0930 10:50:19.898922 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzdgh\" (UniqueName: \"kubernetes.io/projected/180cd06e-ebb0-4544-9808-7d857c06f66f-kube-api-access-rzdgh\") pod \"redhat-marketplace-bthwp\" (UID: \"180cd06e-ebb0-4544-9808-7d857c06f66f\") " pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:20 crc kubenswrapper[4970]: I0930 10:50:20.001051 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180cd06e-ebb0-4544-9808-7d857c06f66f-utilities\") pod \"redhat-marketplace-bthwp\" (UID: \"180cd06e-ebb0-4544-9808-7d857c06f66f\") " pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:20 crc kubenswrapper[4970]: I0930 10:50:20.001309 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180cd06e-ebb0-4544-9808-7d857c06f66f-catalog-content\") pod \"redhat-marketplace-bthwp\" (UID: 
\"180cd06e-ebb0-4544-9808-7d857c06f66f\") " pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:20 crc kubenswrapper[4970]: I0930 10:50:20.001487 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzdgh\" (UniqueName: \"kubernetes.io/projected/180cd06e-ebb0-4544-9808-7d857c06f66f-kube-api-access-rzdgh\") pod \"redhat-marketplace-bthwp\" (UID: \"180cd06e-ebb0-4544-9808-7d857c06f66f\") " pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:20 crc kubenswrapper[4970]: I0930 10:50:20.001643 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180cd06e-ebb0-4544-9808-7d857c06f66f-utilities\") pod \"redhat-marketplace-bthwp\" (UID: \"180cd06e-ebb0-4544-9808-7d857c06f66f\") " pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:20 crc kubenswrapper[4970]: I0930 10:50:20.001964 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180cd06e-ebb0-4544-9808-7d857c06f66f-catalog-content\") pod \"redhat-marketplace-bthwp\" (UID: \"180cd06e-ebb0-4544-9808-7d857c06f66f\") " pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:20 crc kubenswrapper[4970]: I0930 10:50:20.050813 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzdgh\" (UniqueName: \"kubernetes.io/projected/180cd06e-ebb0-4544-9808-7d857c06f66f-kube-api-access-rzdgh\") pod \"redhat-marketplace-bthwp\" (UID: \"180cd06e-ebb0-4544-9808-7d857c06f66f\") " pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:20 crc kubenswrapper[4970]: I0930 10:50:20.072154 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-kl5qr_2f87be4d-7eec-4133-a07f-4cbe2b88548f/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:20 crc kubenswrapper[4970]: I0930 10:50:20.091468 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:20 crc kubenswrapper[4970]: I0930 10:50:20.374394 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6cf9989bfd-qxb2j_f1a85d9a-3eae-49ff-af87-14d444dec7d6/keystone-api/0.log" Sep 30 10:50:20 crc kubenswrapper[4970]: I0930 10:50:20.533893 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bthwp"] Sep 30 10:50:20 crc kubenswrapper[4970]: I0930 10:50:20.615261 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_fd6ccdd4-d83d-47fe-8283-b4625ad7d17f/kube-state-metrics/0.log" Sep 30 10:50:20 crc kubenswrapper[4970]: I0930 10:50:20.802111 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-jgcst_109a756f-75b7-4ce1-a45f-3363d2d4097e/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:21 crc kubenswrapper[4970]: I0930 10:50:21.101312 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-65fc8b84cc-9lm9w_7244a0da-0989-4ba5-be03-aab3ab0fadce/neutron-api/0.log" Sep 30 10:50:21 crc kubenswrapper[4970]: I0930 10:50:21.153262 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-65fc8b84cc-9lm9w_7244a0da-0989-4ba5-be03-aab3ab0fadce/neutron-httpd/0.log" Sep 30 10:50:21 crc kubenswrapper[4970]: I0930 10:50:21.272580 4970 generic.go:334] "Generic (PLEG): container finished" podID="180cd06e-ebb0-4544-9808-7d857c06f66f" containerID="95ca46f881eadba5374a5c44455188e4e57e778e3a2046195e2d746b214c84d6" exitCode=0 Sep 30 10:50:21 crc kubenswrapper[4970]: I0930 10:50:21.272630 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bthwp" event={"ID":"180cd06e-ebb0-4544-9808-7d857c06f66f","Type":"ContainerDied","Data":"95ca46f881eadba5374a5c44455188e4e57e778e3a2046195e2d746b214c84d6"} Sep 30 10:50:21 crc kubenswrapper[4970]: I0930 10:50:21.272684 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bthwp" event={"ID":"180cd06e-ebb0-4544-9808-7d857c06f66f","Type":"ContainerStarted","Data":"e5d3c250f4322549dff6c397bb327477612fe72d9f5a116a1121df434fdd25d2"} Sep 30 10:50:21 crc kubenswrapper[4970]: I0930 10:50:21.343068 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-vd6vm_be128dee-e0c7-4db4-a760-b03c3b8d263d/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:21 crc kubenswrapper[4970]: I0930 10:50:21.943213 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b1eaeb05-1ae9-4640-bc87-da6567c4f1a1/nova-api-log/0.log" Sep 30 10:50:22 crc kubenswrapper[4970]: I0930 10:50:22.064768 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_66fda279-0629-46e9-8f55-145febd6facd/nova-cell0-conductor-conductor/0.log" Sep 30 10:50:22 crc kubenswrapper[4970]: I0930 10:50:22.206414 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b1eaeb05-1ae9-4640-bc87-da6567c4f1a1/nova-api-api/0.log" Sep 30 10:50:22 crc kubenswrapper[4970]: I0930 10:50:22.402981 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3e334a93-12b2-402f-97e7-d5f77c7cb8bc/nova-cell1-conductor-conductor/0.log" Sep 30 10:50:22 crc kubenswrapper[4970]: I0930 10:50:22.583954 4970 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_e81a0f38-b543-4aa5-aef9-fc02f91800e5/nova-cell1-novncproxy-novncproxy/0.log" Sep 30 10:50:22 crc kubenswrapper[4970]: I0930 10:50:22.674368 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-428p6_4f461d08-f275-49fd-be5d-3f4198d81343/nova-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:22 crc kubenswrapper[4970]: I0930 10:50:22.958883 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fec6d022-057f-4f80-9da1-25c1f4e1544e/nova-metadata-log/0.log" Sep 30 10:50:23 crc kubenswrapper[4970]: I0930 10:50:23.293389 4970 generic.go:334] "Generic (PLEG): container finished" podID="180cd06e-ebb0-4544-9808-7d857c06f66f" containerID="c011770732586db7f82616d7ce82bf390748e204eba28442e3fb9122fb39a13a" exitCode=0 Sep 30 10:50:23 crc kubenswrapper[4970]: I0930 10:50:23.293438 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bthwp" event={"ID":"180cd06e-ebb0-4544-9808-7d857c06f66f","Type":"ContainerDied","Data":"c011770732586db7f82616d7ce82bf390748e204eba28442e3fb9122fb39a13a"} Sep 30 10:50:23 crc kubenswrapper[4970]: I0930 10:50:23.460773 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_c599b731-6bc5-4882-9f48-0abfa125f843/nova-scheduler-scheduler/0.log" Sep 30 10:50:23 crc kubenswrapper[4970]: I0930 10:50:23.619122 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_60d8ffcf-dc53-4fac-92a0-64136b4b0d4b/mysql-bootstrap/0.log" Sep 30 10:50:23 crc kubenswrapper[4970]: I0930 10:50:23.866437 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_60d8ffcf-dc53-4fac-92a0-64136b4b0d4b/mysql-bootstrap/0.log" Sep 30 10:50:23 crc kubenswrapper[4970]: I0930 10:50:23.902415 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_60d8ffcf-dc53-4fac-92a0-64136b4b0d4b/galera/0.log" Sep 30 10:50:24 crc kubenswrapper[4970]: I0930 10:50:24.304765 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bthwp" event={"ID":"180cd06e-ebb0-4544-9808-7d857c06f66f","Type":"ContainerStarted","Data":"92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed"} Sep 30 10:50:24 crc kubenswrapper[4970]: I0930 10:50:24.306567 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dee5bc19-bb45-4962-adaf-ff6561817272/mysql-bootstrap/0.log" Sep 30 10:50:24 crc kubenswrapper[4970]: I0930 10:50:24.326364 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bthwp" podStartSLOduration=2.829825823 podStartE2EDuration="5.326344187s" podCreationTimestamp="2025-09-30 10:50:19 +0000 UTC" firstStartedPulling="2025-09-30 10:50:21.275272366 +0000 UTC m=+3834.347123300" lastFinishedPulling="2025-09-30 10:50:23.77179073 +0000 UTC m=+3836.843641664" observedRunningTime="2025-09-30 10:50:24.325766651 +0000 UTC m=+3837.397617575" watchObservedRunningTime="2025-09-30 10:50:24.326344187 +0000 UTC m=+3837.398195121" Sep 30 10:50:24 crc kubenswrapper[4970]: I0930 10:50:24.346791 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_fec6d022-057f-4f80-9da1-25c1f4e1544e/nova-metadata-metadata/0.log" Sep 30 10:50:24 crc kubenswrapper[4970]: I0930 10:50:24.563425 
4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dee5bc19-bb45-4962-adaf-ff6561817272/galera/0.log" Sep 30 10:50:24 crc kubenswrapper[4970]: I0930 10:50:24.567206 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_dee5bc19-bb45-4962-adaf-ff6561817272/mysql-bootstrap/0.log" Sep 30 10:50:24 crc kubenswrapper[4970]: I0930 10:50:24.839017 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_8d91e0e8-ee07-493a-bb4d-6949ce548047/openstackclient/0.log" Sep 30 10:50:24 crc kubenswrapper[4970]: I0930 10:50:24.902261 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-6v2k7_20c3b444-9843-4584-81cb-9e5cb444c98b/openstack-network-exporter/0.log" Sep 30 10:50:25 crc kubenswrapper[4970]: I0930 10:50:25.115095 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h8572_0baee2b3-0d8f-4586-a636-c452b0d541d9/ovsdb-server-init/0.log" Sep 30 10:50:25 crc kubenswrapper[4970]: I0930 10:50:25.363114 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h8572_0baee2b3-0d8f-4586-a636-c452b0d541d9/ovsdb-server-init/0.log" Sep 30 10:50:25 crc kubenswrapper[4970]: I0930 10:50:25.385689 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h8572_0baee2b3-0d8f-4586-a636-c452b0d541d9/ovsdb-server/0.log" Sep 30 10:50:25 crc kubenswrapper[4970]: I0930 10:50:25.405853 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-h8572_0baee2b3-0d8f-4586-a636-c452b0d541d9/ovs-vswitchd/0.log" Sep 30 10:50:25 crc kubenswrapper[4970]: I0930 10:50:25.576299 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vtdnt_ea8f06d0-75e0-4ed8-9e37-086886b019e5/ovn-controller/0.log" Sep 30 10:50:25 crc kubenswrapper[4970]: I0930 10:50:25.802689 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dwx4r_450706d9-395f-417b-b37d-7ded156dce3a/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:25 crc kubenswrapper[4970]: I0930 10:50:25.874272 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_57282deb-d1f0-4e71-90e2-71c39075d208/openstack-network-exporter/0.log" Sep 30 10:50:26 crc kubenswrapper[4970]: I0930 10:50:26.017480 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_57282deb-d1f0-4e71-90e2-71c39075d208/ovn-northd/0.log" Sep 30 10:50:26 crc kubenswrapper[4970]: I0930 10:50:26.110979 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_23d91298-5a5e-428e-afe3-f5625b74f3e0/openstack-network-exporter/0.log" Sep 30 10:50:26 crc kubenswrapper[4970]: I0930 10:50:26.218229 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_23d91298-5a5e-428e-afe3-f5625b74f3e0/ovsdbserver-nb/0.log" Sep 30 10:50:26 crc kubenswrapper[4970]: I0930 10:50:26.280511 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6fb84c2f-32c7-4ac2-b7aa-343846c86bfa/openstack-network-exporter/0.log" Sep 30 10:50:26 crc kubenswrapper[4970]: I0930 10:50:26.426189 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6fb84c2f-32c7-4ac2-b7aa-343846c86bfa/ovsdbserver-sb/0.log" Sep 30 10:50:26 crc kubenswrapper[4970]: I0930 10:50:26.590644 4970 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8cc9569d-ll5d9_16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890/placement-api/0.log" Sep 30 10:50:26 crc kubenswrapper[4970]: I0930 10:50:26.720349 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-8cc9569d-ll5d9_16ff53ae-44dc-4cc0-bc1f-9ade2d6f8890/placement-log/0.log" Sep 30 10:50:26 crc kubenswrapper[4970]: I0930 10:50:26.775886 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7690d385-7ca5-472c-8ee7-5c3aa4030951/setup-container/0.log" Sep 30 10:50:26 crc kubenswrapper[4970]: I0930 10:50:26.994545 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7690d385-7ca5-472c-8ee7-5c3aa4030951/setup-container/0.log" Sep 30 10:50:27 crc kubenswrapper[4970]: I0930 10:50:27.024134 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7690d385-7ca5-472c-8ee7-5c3aa4030951/rabbitmq/0.log" Sep 30 10:50:27 crc kubenswrapper[4970]: I0930 10:50:27.193204 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4582e248-17bf-40bb-9072-f64d72d1fc82/setup-container/0.log" Sep 30 10:50:27 crc kubenswrapper[4970]: I0930 10:50:27.388146 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4582e248-17bf-40bb-9072-f64d72d1fc82/setup-container/0.log" Sep 30 10:50:27 crc kubenswrapper[4970]: I0930 10:50:27.469702 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_4582e248-17bf-40bb-9072-f64d72d1fc82/rabbitmq/0.log" Sep 30 10:50:27 crc kubenswrapper[4970]: I0930 10:50:27.631445 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-l2bmj_0cf74a13-4f04-472b-af17-1c856152950f/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:27 crc kubenswrapper[4970]: I0930 10:50:27.716621 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-bbkpz_a934a1c0-31ef-4341-a85f-a13cd865adc1/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:27 crc kubenswrapper[4970]: I0930 10:50:27.850089 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-p22cr_ebfb47a0-7f15-4839-b84d-7aef631222f8/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:28 crc kubenswrapper[4970]: I0930 10:50:28.014072 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-vg96h_74d48203-8780-4ee2-8db2-39388705bab0/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:28 crc kubenswrapper[4970]: I0930 10:50:28.181290 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-fccfg_2d7598c9-6363-43e1-8913-1b7707fb57eb/ssh-known-hosts-edpm-deployment/0.log" Sep 30 10:50:28 crc kubenswrapper[4970]: I0930 10:50:28.426705 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-597dc56955-zfx9s_1ade2be1-8027-4a99-ae4d-f0394e4d9c1d/proxy-server/0.log" Sep 30 10:50:28 crc kubenswrapper[4970]: I0930 10:50:28.568264 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ctbn9_9a77b5ae-8010-4d5d-8cec-ae87b4fba9d5/swift-ring-rebalance/0.log" Sep 30 10:50:28 crc kubenswrapper[4970]: I0930 10:50:28.808513 4970 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-597dc56955-zfx9s_1ade2be1-8027-4a99-ae4d-f0394e4d9c1d/proxy-httpd/0.log" Sep 30 10:50:28 crc kubenswrapper[4970]: I0930 10:50:28.814434 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/account-auditor/0.log" Sep 30 10:50:28 crc kubenswrapper[4970]: I0930 10:50:28.982306 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/account-reaper/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.024925 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/account-replicator/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.036554 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/account-server/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.178878 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/container-auditor/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.208405 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/container-server/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.268813 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/container-replicator/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.364984 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/container-updater/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.447672 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/object-auditor/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.461652 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/object-expirer/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.569297 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/object-replicator/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.626823 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/object-server/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.693573 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/object-updater/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.753509 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/rsync/0.log" Sep 30 10:50:29 crc kubenswrapper[4970]: I0930 10:50:29.810913 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_420e577e-2e62-4d35-b9c7-f354dd81add8/swift-recon-cron/0.log" Sep 30 10:50:30 crc kubenswrapper[4970]: I0930 10:50:30.027543 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-qfbt6_54899213-55ca-42b6-8838-e42c962341b6/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:30 crc kubenswrapper[4970]: I0930 10:50:30.091596 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:30 crc kubenswrapper[4970]: I0930 10:50:30.091640 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:30 crc kubenswrapper[4970]: I0930 10:50:30.132601 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:30 crc kubenswrapper[4970]: I0930 10:50:30.168108 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_b39c8562-3dd8-439a-b17d-967859c86ec2/tempest-tests-tempest-tests-runner/0.log" Sep 30 10:50:30 crc kubenswrapper[4970]: I0930 10:50:30.314811 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_7bdc6121-410f-4041-b69f-98368027c449/test-operator-logs-container/0.log" Sep 30 10:50:30 crc kubenswrapper[4970]: I0930 10:50:30.411644 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:30 crc kubenswrapper[4970]: I0930 10:50:30.457531 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bthwp"] Sep 30 10:50:30 crc kubenswrapper[4970]: I0930 10:50:30.557871 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-nf84v_65b6fe1d-bb74-4809-9322-f4bbb9f1f1c5/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Sep 30 10:50:32 crc kubenswrapper[4970]: I0930 10:50:32.396177 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bthwp" podUID="180cd06e-ebb0-4544-9808-7d857c06f66f" containerName="registry-server" containerID="cri-o://92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed" gracePeriod=2 Sep 30 10:50:32 crc kubenswrapper[4970]: I0930 10:50:32.881772 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:32 crc kubenswrapper[4970]: I0930 10:50:32.971528 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180cd06e-ebb0-4544-9808-7d857c06f66f-utilities\") pod \"180cd06e-ebb0-4544-9808-7d857c06f66f\" (UID: \"180cd06e-ebb0-4544-9808-7d857c06f66f\") " Sep 30 10:50:32 crc kubenswrapper[4970]: I0930 10:50:32.971616 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzdgh\" (UniqueName: \"kubernetes.io/projected/180cd06e-ebb0-4544-9808-7d857c06f66f-kube-api-access-rzdgh\") pod \"180cd06e-ebb0-4544-9808-7d857c06f66f\" (UID: \"180cd06e-ebb0-4544-9808-7d857c06f66f\") " Sep 30 10:50:32 crc kubenswrapper[4970]: I0930 10:50:32.971700 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180cd06e-ebb0-4544-9808-7d857c06f66f-catalog-content\") pod \"180cd06e-ebb0-4544-9808-7d857c06f66f\" (UID: \"180cd06e-ebb0-4544-9808-7d857c06f66f\") " Sep 30 10:50:32 crc kubenswrapper[4970]: I0930 10:50:32.974791 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/180cd06e-ebb0-4544-9808-7d857c06f66f-utilities" (OuterVolumeSpecName: "utilities") pod "180cd06e-ebb0-4544-9808-7d857c06f66f" (UID: "180cd06e-ebb0-4544-9808-7d857c06f66f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:50:32 crc kubenswrapper[4970]: I0930 10:50:32.998106 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/180cd06e-ebb0-4544-9808-7d857c06f66f-kube-api-access-rzdgh" (OuterVolumeSpecName: "kube-api-access-rzdgh") pod "180cd06e-ebb0-4544-9808-7d857c06f66f" (UID: "180cd06e-ebb0-4544-9808-7d857c06f66f"). InnerVolumeSpecName "kube-api-access-rzdgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.012817 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/180cd06e-ebb0-4544-9808-7d857c06f66f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "180cd06e-ebb0-4544-9808-7d857c06f66f" (UID: "180cd06e-ebb0-4544-9808-7d857c06f66f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.075075 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/180cd06e-ebb0-4544-9808-7d857c06f66f-utilities\") on node \"crc\" DevicePath \"\"" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.075098 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzdgh\" (UniqueName: \"kubernetes.io/projected/180cd06e-ebb0-4544-9808-7d857c06f66f-kube-api-access-rzdgh\") on node \"crc\" DevicePath \"\"" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.075108 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/180cd06e-ebb0-4544-9808-7d857c06f66f-catalog-content\") on node \"crc\" DevicePath \"\"" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.407634 4970 generic.go:334] "Generic (PLEG): container finished" podID="180cd06e-ebb0-4544-9808-7d857c06f66f" containerID="92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed" exitCode=0 Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.407685 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bthwp" event={"ID":"180cd06e-ebb0-4544-9808-7d857c06f66f","Type":"ContainerDied","Data":"92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed"} Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.407717 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bthwp" event={"ID":"180cd06e-ebb0-4544-9808-7d857c06f66f","Type":"ContainerDied","Data":"e5d3c250f4322549dff6c397bb327477612fe72d9f5a116a1121df434fdd25d2"} Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.407741 4970 scope.go:117] "RemoveContainer" containerID="92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.407908 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bthwp" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.439368 4970 scope.go:117] "RemoveContainer" containerID="c011770732586db7f82616d7ce82bf390748e204eba28442e3fb9122fb39a13a" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.469226 4970 scope.go:117] "RemoveContainer" containerID="95ca46f881eadba5374a5c44455188e4e57e778e3a2046195e2d746b214c84d6" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.469371 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bthwp"] Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.476087 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bthwp"] Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.533784 4970 scope.go:117] "RemoveContainer" containerID="92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed" Sep 30 10:50:33 crc kubenswrapper[4970]: E0930 10:50:33.539168 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed\": container with ID starting with 92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed not found: ID does not exist" containerID="92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.539267 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed"} err="failed to get container status \"92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed\": rpc error: code = NotFound desc = could not find container \"92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed\": container with ID starting with 92db85b8d04a4670c27280a06f653b2b4919b656be99ee0cc3b97b61a3b3daed not found: ID does not exist" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.539352 4970 scope.go:117] "RemoveContainer" containerID="c011770732586db7f82616d7ce82bf390748e204eba28442e3fb9122fb39a13a" Sep 30 10:50:33 crc kubenswrapper[4970]: E0930 10:50:33.539915 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c011770732586db7f82616d7ce82bf390748e204eba28442e3fb9122fb39a13a\": container with ID starting with c011770732586db7f82616d7ce82bf390748e204eba28442e3fb9122fb39a13a not found: ID does not exist" containerID="c011770732586db7f82616d7ce82bf390748e204eba28442e3fb9122fb39a13a" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.539957 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c011770732586db7f82616d7ce82bf390748e204eba28442e3fb9122fb39a13a"} err="failed to get container status \"c011770732586db7f82616d7ce82bf390748e204eba28442e3fb9122fb39a13a\": rpc error: code = NotFound desc = could not find container \"c011770732586db7f82616d7ce82bf390748e204eba28442e3fb9122fb39a13a\": container with ID starting with c011770732586db7f82616d7ce82bf390748e204eba28442e3fb9122fb39a13a not found: ID does not exist" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.540015 4970 scope.go:117] "RemoveContainer" containerID="95ca46f881eadba5374a5c44455188e4e57e778e3a2046195e2d746b214c84d6" Sep 30 10:50:33 crc kubenswrapper[4970]: E0930 10:50:33.540294 4970 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"95ca46f881eadba5374a5c44455188e4e57e778e3a2046195e2d746b214c84d6\": container with ID starting with 95ca46f881eadba5374a5c44455188e4e57e778e3a2046195e2d746b214c84d6 not found: ID does not exist" containerID="95ca46f881eadba5374a5c44455188e4e57e778e3a2046195e2d746b214c84d6" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.540323 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95ca46f881eadba5374a5c44455188e4e57e778e3a2046195e2d746b214c84d6"} err="failed to get container status \"95ca46f881eadba5374a5c44455188e4e57e778e3a2046195e2d746b214c84d6\": rpc error: code = NotFound desc = could not find container \"95ca46f881eadba5374a5c44455188e4e57e778e3a2046195e2d746b214c84d6\": container with ID starting with 95ca46f881eadba5374a5c44455188e4e57e778e3a2046195e2d746b214c84d6 not found: ID does not exist" Sep 30 10:50:33 crc kubenswrapper[4970]: I0930 10:50:33.678335 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="180cd06e-ebb0-4544-9808-7d857c06f66f" path="/var/lib/kubelet/pods/180cd06e-ebb0-4544-9808-7d857c06f66f/volumes" Sep 30 10:50:34 crc kubenswrapper[4970]: I0930 10:50:34.821592 4970 patch_prober.go:28] interesting pod/machine-config-daemon-gcphg container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Sep 30 10:50:34 crc kubenswrapper[4970]: I0930 10:50:34.821955 4970 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Sep 30 10:50:34 crc kubenswrapper[4970]: I0930 10:50:34.822027 4970 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" Sep 30 10:50:34 crc kubenswrapper[4970]: I0930 10:50:34.822851 4970 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"} pod="openshift-machine-config-operator/machine-config-daemon-gcphg" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Sep 30 10:50:34 crc kubenswrapper[4970]: I0930 10:50:34.822922 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" containerName="machine-config-daemon" containerID="cri-o://6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2" gracePeriod=600 Sep 30 10:50:34 crc kubenswrapper[4970]: E0930 10:50:34.948730 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:50:35 crc kubenswrapper[4970]: I0930 10:50:35.433204 4970 generic.go:334] 
"Generic (PLEG): container finished" podID="92198682-93fe-4b8a-8b03-bb768b56a129" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2" exitCode=0 Sep 30 10:50:35 crc kubenswrapper[4970]: I0930 10:50:35.433246 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerDied","Data":"6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"} Sep 30 10:50:35 crc kubenswrapper[4970]: I0930 10:50:35.433526 4970 scope.go:117] "RemoveContainer" containerID="09447c97669fe66df87c231f3f61ffa87f4993dc0b0714ceaa7aee2d0c6465bc" Sep 30 10:50:35 crc kubenswrapper[4970]: I0930 10:50:35.434160 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2" Sep 30 10:50:35 crc kubenswrapper[4970]: E0930 10:50:35.434426 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:50:37 crc kubenswrapper[4970]: I0930 10:50:37.481242 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d524179d-ea87-48d3-b87f-da18d0a059c8/memcached/0.log" Sep 30 10:50:47 crc kubenswrapper[4970]: I0930 10:50:47.687846 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2" Sep 30 10:50:47 crc kubenswrapper[4970]: E0930 10:50:47.688896 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:50:58 crc kubenswrapper[4970]: I0930 10:50:58.668860 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2" Sep 30 10:50:58 crc kubenswrapper[4970]: E0930 10:50:58.670310 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:51:03 crc kubenswrapper[4970]: I0930 10:51:03.035665 4970 scope.go:117] "RemoveContainer" containerID="dc144096e06c16bfbd4f1871e07a19b274d4005880dc0f229cb4b754c8a708c9" Sep 30 10:51:08 crc kubenswrapper[4970]: I0930 10:51:08.755946 4970 generic.go:334] "Generic (PLEG): container finished" podID="96ae6ec6-de12-4067-992f-800d13e06611" containerID="658f37273f47ac14415581b069e398db7b06b6026ce5cf52d4923ebfc8c1663c" exitCode=0 Sep 30 10:51:08 crc kubenswrapper[4970]: I0930 10:51:08.756050 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/crc-debug-f78f6" 
event={"ID":"96ae6ec6-de12-4067-992f-800d13e06611","Type":"ContainerDied","Data":"658f37273f47ac14415581b069e398db7b06b6026ce5cf52d4923ebfc8c1663c"} Sep 30 10:51:09 crc kubenswrapper[4970]: I0930 10:51:09.890658 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk4t7/crc-debug-f78f6" Sep 30 10:51:09 crc kubenswrapper[4970]: I0930 10:51:09.930024 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xk4t7/crc-debug-f78f6"] Sep 30 10:51:09 crc kubenswrapper[4970]: I0930 10:51:09.941986 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xk4t7/crc-debug-f78f6"] Sep 30 10:51:10 crc kubenswrapper[4970]: I0930 10:51:10.051701 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqbnh\" (UniqueName: \"kubernetes.io/projected/96ae6ec6-de12-4067-992f-800d13e06611-kube-api-access-kqbnh\") pod \"96ae6ec6-de12-4067-992f-800d13e06611\" (UID: \"96ae6ec6-de12-4067-992f-800d13e06611\") " Sep 30 10:51:10 crc kubenswrapper[4970]: I0930 10:51:10.051902 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96ae6ec6-de12-4067-992f-800d13e06611-host\") pod \"96ae6ec6-de12-4067-992f-800d13e06611\" (UID: \"96ae6ec6-de12-4067-992f-800d13e06611\") " Sep 30 10:51:10 crc kubenswrapper[4970]: I0930 10:51:10.052448 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/96ae6ec6-de12-4067-992f-800d13e06611-host" (OuterVolumeSpecName: "host") pod "96ae6ec6-de12-4067-992f-800d13e06611" (UID: "96ae6ec6-de12-4067-992f-800d13e06611"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:51:10 crc kubenswrapper[4970]: I0930 10:51:10.058589 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96ae6ec6-de12-4067-992f-800d13e06611-kube-api-access-kqbnh" (OuterVolumeSpecName: "kube-api-access-kqbnh") pod "96ae6ec6-de12-4067-992f-800d13e06611" (UID: "96ae6ec6-de12-4067-992f-800d13e06611"). InnerVolumeSpecName "kube-api-access-kqbnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:51:10 crc kubenswrapper[4970]: I0930 10:51:10.153952 4970 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/96ae6ec6-de12-4067-992f-800d13e06611-host\") on node \"crc\" DevicePath \"\"" Sep 30 10:51:10 crc kubenswrapper[4970]: I0930 10:51:10.154008 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqbnh\" (UniqueName: \"kubernetes.io/projected/96ae6ec6-de12-4067-992f-800d13e06611-kube-api-access-kqbnh\") on node \"crc\" DevicePath \"\"" Sep 30 10:51:10 crc kubenswrapper[4970]: I0930 10:51:10.777369 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3c8c1a382720cb223f87f5874ad487f5ff5b7fa3ddb6c9edf3de6a9499783d6" Sep 30 10:51:10 crc kubenswrapper[4970]: I0930 10:51:10.777500 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xk4t7/crc-debug-f78f6" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.136336 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xk4t7/crc-debug-9gmfb"] Sep 30 10:51:11 crc kubenswrapper[4970]: E0930 10:51:11.136916 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180cd06e-ebb0-4544-9808-7d857c06f66f" containerName="registry-server" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.136947 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="180cd06e-ebb0-4544-9808-7d857c06f66f" containerName="registry-server" Sep 30 10:51:11 crc kubenswrapper[4970]: E0930 10:51:11.137462 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180cd06e-ebb0-4544-9808-7d857c06f66f" containerName="extract-utilities" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.137492 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="180cd06e-ebb0-4544-9808-7d857c06f66f" containerName="extract-utilities" Sep 30 10:51:11 crc kubenswrapper[4970]: E0930 10:51:11.137525 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="180cd06e-ebb0-4544-9808-7d857c06f66f" containerName="extract-content" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.137540 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="180cd06e-ebb0-4544-9808-7d857c06f66f" containerName="extract-content" Sep 30 10:51:11 crc kubenswrapper[4970]: E0930 10:51:11.137571 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96ae6ec6-de12-4067-992f-800d13e06611" containerName="container-00" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.137585 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="96ae6ec6-de12-4067-992f-800d13e06611" containerName="container-00" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.138612 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="180cd06e-ebb0-4544-9808-7d857c06f66f" containerName="registry-server" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.138667 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="96ae6ec6-de12-4067-992f-800d13e06611" containerName="container-00" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.139758 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.273508 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d33a809c-55bc-4b71-9057-9570743a3a47-host\") pod \"crc-debug-9gmfb\" (UID: \"d33a809c-55bc-4b71-9057-9570743a3a47\") " pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.273624 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgsbp\" (UniqueName: \"kubernetes.io/projected/d33a809c-55bc-4b71-9057-9570743a3a47-kube-api-access-qgsbp\") pod \"crc-debug-9gmfb\" (UID: \"d33a809c-55bc-4b71-9057-9570743a3a47\") " pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.375206 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgsbp\" (UniqueName: \"kubernetes.io/projected/d33a809c-55bc-4b71-9057-9570743a3a47-kube-api-access-qgsbp\") pod \"crc-debug-9gmfb\" (UID: \"d33a809c-55bc-4b71-9057-9570743a3a47\") " pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.375341 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d33a809c-55bc-4b71-9057-9570743a3a47-host\") pod \"crc-debug-9gmfb\" (UID: \"d33a809c-55bc-4b71-9057-9570743a3a47\") " pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.375469 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d33a809c-55bc-4b71-9057-9570743a3a47-host\") pod \"crc-debug-9gmfb\" (UID: \"d33a809c-55bc-4b71-9057-9570743a3a47\") " pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.392530 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgsbp\" (UniqueName: \"kubernetes.io/projected/d33a809c-55bc-4b71-9057-9570743a3a47-kube-api-access-qgsbp\") pod \"crc-debug-9gmfb\" (UID: \"d33a809c-55bc-4b71-9057-9570743a3a47\") " pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.466410 4970 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.669363 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2" Sep 30 10:51:11 crc kubenswrapper[4970]: E0930 10:51:11.670189 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.683454 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96ae6ec6-de12-4067-992f-800d13e06611" path="/var/lib/kubelet/pods/96ae6ec6-de12-4067-992f-800d13e06611/volumes" Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.793514 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" event={"ID":"d33a809c-55bc-4b71-9057-9570743a3a47","Type":"ContainerStarted","Data":"fd4b60af9e8f206a1952a3bf436ed596accc79bc50a03652bd8f2b7788cd5bf1"} Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.793819 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" event={"ID":"d33a809c-55bc-4b71-9057-9570743a3a47","Type":"ContainerStarted","Data":"f1ebe8160755003ec935899e87383733afbaa586f5f8fcda517e1ea02991d24d"} Sep 30 10:51:11 crc kubenswrapper[4970]: I0930 10:51:11.824778 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" podStartSLOduration=0.824759536 podStartE2EDuration="824.759536ms" podCreationTimestamp="2025-09-30 10:51:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-30 10:51:11.813784494 +0000 UTC m=+3884.885635468" watchObservedRunningTime="2025-09-30 10:51:11.824759536 +0000 UTC m=+3884.896610480" Sep 30 10:51:12 crc kubenswrapper[4970]: I0930 10:51:12.805589 4970 generic.go:334] "Generic (PLEG): container finished" podID="d33a809c-55bc-4b71-9057-9570743a3a47" containerID="fd4b60af9e8f206a1952a3bf436ed596accc79bc50a03652bd8f2b7788cd5bf1" exitCode=0 Sep 30 10:51:12 crc kubenswrapper[4970]: I0930 10:51:12.805629 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" event={"ID":"d33a809c-55bc-4b71-9057-9570743a3a47","Type":"ContainerDied","Data":"fd4b60af9e8f206a1952a3bf436ed596accc79bc50a03652bd8f2b7788cd5bf1"} Sep 30 10:51:13 crc kubenswrapper[4970]: I0930 10:51:13.937372 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" Sep 30 10:51:14 crc kubenswrapper[4970]: I0930 10:51:14.113854 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d33a809c-55bc-4b71-9057-9570743a3a47-host\") pod \"d33a809c-55bc-4b71-9057-9570743a3a47\" (UID: \"d33a809c-55bc-4b71-9057-9570743a3a47\") " Sep 30 10:51:14 crc kubenswrapper[4970]: I0930 10:51:14.113904 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d33a809c-55bc-4b71-9057-9570743a3a47-host" (OuterVolumeSpecName: "host") pod "d33a809c-55bc-4b71-9057-9570743a3a47" (UID: "d33a809c-55bc-4b71-9057-9570743a3a47"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:51:14 crc kubenswrapper[4970]: I0930 10:51:14.114095 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgsbp\" (UniqueName: \"kubernetes.io/projected/d33a809c-55bc-4b71-9057-9570743a3a47-kube-api-access-qgsbp\") pod \"d33a809c-55bc-4b71-9057-9570743a3a47\" (UID: \"d33a809c-55bc-4b71-9057-9570743a3a47\") " Sep 30 10:51:14 crc kubenswrapper[4970]: I0930 10:51:14.114467 4970 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d33a809c-55bc-4b71-9057-9570743a3a47-host\") on node \"crc\" DevicePath \"\"" Sep 30 10:51:14 crc kubenswrapper[4970]: I0930 10:51:14.122627 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d33a809c-55bc-4b71-9057-9570743a3a47-kube-api-access-qgsbp" (OuterVolumeSpecName: "kube-api-access-qgsbp") pod "d33a809c-55bc-4b71-9057-9570743a3a47" (UID: "d33a809c-55bc-4b71-9057-9570743a3a47"). InnerVolumeSpecName "kube-api-access-qgsbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:51:14 crc kubenswrapper[4970]: I0930 10:51:14.215545 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgsbp\" (UniqueName: \"kubernetes.io/projected/d33a809c-55bc-4b71-9057-9570743a3a47-kube-api-access-qgsbp\") on node \"crc\" DevicePath \"\"" Sep 30 10:51:14 crc kubenswrapper[4970]: I0930 10:51:14.825458 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" event={"ID":"d33a809c-55bc-4b71-9057-9570743a3a47","Type":"ContainerDied","Data":"f1ebe8160755003ec935899e87383733afbaa586f5f8fcda517e1ea02991d24d"} Sep 30 10:51:14 crc kubenswrapper[4970]: I0930 10:51:14.825727 4970 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ebe8160755003ec935899e87383733afbaa586f5f8fcda517e1ea02991d24d" Sep 30 10:51:14 crc kubenswrapper[4970]: I0930 10:51:14.825560 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xk4t7/crc-debug-9gmfb" Sep 30 10:51:18 crc kubenswrapper[4970]: I0930 10:51:18.542384 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xk4t7/crc-debug-9gmfb"] Sep 30 10:51:18 crc kubenswrapper[4970]: I0930 10:51:18.549690 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xk4t7/crc-debug-9gmfb"] Sep 30 10:51:19 crc kubenswrapper[4970]: I0930 10:51:19.677948 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d33a809c-55bc-4b71-9057-9570743a3a47" path="/var/lib/kubelet/pods/d33a809c-55bc-4b71-9057-9570743a3a47/volumes" Sep 30 10:51:19 crc kubenswrapper[4970]: I0930 10:51:19.725980 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-xk4t7/crc-debug-pcg7z"] Sep 30 10:51:19 crc kubenswrapper[4970]: E0930 10:51:19.726347 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d33a809c-55bc-4b71-9057-9570743a3a47" containerName="container-00" Sep 30 10:51:19 crc kubenswrapper[4970]: I0930 10:51:19.726364 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="d33a809c-55bc-4b71-9057-9570743a3a47" containerName="container-00" Sep 30 10:51:19 crc kubenswrapper[4970]: I0930 10:51:19.726552 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="d33a809c-55bc-4b71-9057-9570743a3a47" containerName="container-00" Sep 30 10:51:19 crc kubenswrapper[4970]: I0930 10:51:19.727164 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk4t7/crc-debug-pcg7z" Sep 30 10:51:19 crc kubenswrapper[4970]: I0930 10:51:19.898967 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8-host\") pod \"crc-debug-pcg7z\" (UID: \"8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8\") " pod="openshift-must-gather-xk4t7/crc-debug-pcg7z" Sep 30 10:51:19 crc kubenswrapper[4970]: I0930 10:51:19.899055 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbqcf\" (UniqueName: \"kubernetes.io/projected/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8-kube-api-access-sbqcf\") pod \"crc-debug-pcg7z\" (UID: \"8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8\") " pod="openshift-must-gather-xk4t7/crc-debug-pcg7z" Sep 30 10:51:20 crc kubenswrapper[4970]: I0930 10:51:20.009053 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8-host\") pod \"crc-debug-pcg7z\" (UID: \"8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8\") " pod="openshift-must-gather-xk4t7/crc-debug-pcg7z" Sep 30 10:51:20 crc kubenswrapper[4970]: I0930 10:51:20.009122 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbqcf\" (UniqueName: \"kubernetes.io/projected/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8-kube-api-access-sbqcf\") pod \"crc-debug-pcg7z\" (UID: \"8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8\") " pod="openshift-must-gather-xk4t7/crc-debug-pcg7z" Sep 30 10:51:20 crc kubenswrapper[4970]: I0930 10:51:20.009560 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8-host\") pod \"crc-debug-pcg7z\" (UID: \"8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8\") " pod="openshift-must-gather-xk4t7/crc-debug-pcg7z" Sep 30 10:51:20 crc kubenswrapper[4970]: 
I0930 10:51:20.067725 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbqcf\" (UniqueName: \"kubernetes.io/projected/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8-kube-api-access-sbqcf\") pod \"crc-debug-pcg7z\" (UID: \"8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8\") " pod="openshift-must-gather-xk4t7/crc-debug-pcg7z" Sep 30 10:51:20 crc kubenswrapper[4970]: I0930 10:51:20.349060 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk4t7/crc-debug-pcg7z" Sep 30 10:51:20 crc kubenswrapper[4970]: W0930 10:51:20.380195 4970 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8878e1d3_7d98_4d3b_bea3_1b52af7e1aa8.slice/crio-8edf95bac01f73b1096e10b233466bbb666e0767b9ca8af4623f711741eae15d WatchSource:0}: Error finding container 8edf95bac01f73b1096e10b233466bbb666e0767b9ca8af4623f711741eae15d: Status 404 returned error can't find the container with id 8edf95bac01f73b1096e10b233466bbb666e0767b9ca8af4623f711741eae15d Sep 30 10:51:20 crc kubenswrapper[4970]: I0930 10:51:20.888204 4970 generic.go:334] "Generic (PLEG): container finished" podID="8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8" containerID="8138c6821de33ae36854d0f7de8254e7f5041968cdfbfcb06503e2828811f3fe" exitCode=0 Sep 30 10:51:20 crc kubenswrapper[4970]: I0930 10:51:20.888279 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/crc-debug-pcg7z" event={"ID":"8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8","Type":"ContainerDied","Data":"8138c6821de33ae36854d0f7de8254e7f5041968cdfbfcb06503e2828811f3fe"} Sep 30 10:51:20 crc kubenswrapper[4970]: I0930 10:51:20.888322 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/crc-debug-pcg7z" event={"ID":"8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8","Type":"ContainerStarted","Data":"8edf95bac01f73b1096e10b233466bbb666e0767b9ca8af4623f711741eae15d"} Sep 30 10:51:20 crc kubenswrapper[4970]: I0930 10:51:20.936517 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xk4t7/crc-debug-pcg7z"] Sep 30 10:51:20 crc kubenswrapper[4970]: I0930 10:51:20.947141 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xk4t7/crc-debug-pcg7z"] Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.004361 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk4t7/crc-debug-pcg7z" Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.151598 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8-host\") pod \"8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8\" (UID: \"8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8\") " Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.152046 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbqcf\" (UniqueName: \"kubernetes.io/projected/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8-kube-api-access-sbqcf\") pod \"8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8\" (UID: \"8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8\") " Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.152355 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8-host" (OuterVolumeSpecName: "host") pod "8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8" (UID: "8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8"). 
InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.152546 4970 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8-host\") on node \"crc\" DevicePath \"\"" Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.160317 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8-kube-api-access-sbqcf" (OuterVolumeSpecName: "kube-api-access-sbqcf") pod "8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8" (UID: "8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8"). InnerVolumeSpecName "kube-api-access-sbqcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.253859 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbqcf\" (UniqueName: \"kubernetes.io/projected/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8-kube-api-access-sbqcf\") on node \"crc\" DevicePath \"\"" Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.489373 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/util/0.log" Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.736032 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/pull/0.log" Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.751198 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/util/0.log" Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.778546 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/pull/0.log" Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.905478 4970 scope.go:117] "RemoveContainer" containerID="8138c6821de33ae36854d0f7de8254e7f5041968cdfbfcb06503e2828811f3fe" Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.905524 4970 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-xk4t7/crc-debug-pcg7z" Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.973312 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/pull/0.log" Sep 30 10:51:22 crc kubenswrapper[4970]: I0930 10:51:22.988709 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/util/0.log" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.018351 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_4b1acbb1d07288a8e0645212f6a7574767b801728767a18fb4e5a392f7zpwwc_ae66629d-a629-4053-89ac-0b2bc9fc9407/extract/0.log" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.141073 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-fdjgr_d288c95d-759c-4b29-8be6-304869f99ae7/kube-rbac-proxy/0.log" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.232923 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-fdjgr_d288c95d-759c-4b29-8be6-304869f99ae7/manager/0.log" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.263940 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-ws6gj_7131ae21-9827-4028-9841-fbc480e7b938/kube-rbac-proxy/0.log" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.315960 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-644bddb6d8-ws6gj_7131ae21-9827-4028-9841-fbc480e7b938/manager/0.log" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.450881 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-pf2ph_c9a40f4a-1de7-45da-91e9-4f11637452b2/kube-rbac-proxy/0.log" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.463455 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-pf2ph_c9a40f4a-1de7-45da-91e9-4f11637452b2/manager/0.log" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.637089 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-q2llj_b611fd3e-a529-4c90-8e81-c7352004d62f/kube-rbac-proxy/0.log" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.677356 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8" path="/var/lib/kubelet/pods/8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8/volumes" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.730710 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-q2llj_b611fd3e-a529-4c90-8e81-c7352004d62f/manager/0.log" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.788836 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-ckjvw_908cf55d-1ac7-4814-9f4e-ddb57acb1b76/kube-rbac-proxy/0.log" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.838782 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-ckjvw_908cf55d-1ac7-4814-9f4e-ddb57acb1b76/manager/0.log" Sep 30 10:51:23 crc kubenswrapper[4970]: I0930 10:51:23.876826 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-7bcpp_0dab040d-a74a-48f1-b2e5-fb2fe6de3b58/kube-rbac-proxy/0.log" Sep 30 10:51:24 crc kubenswrapper[4970]: I0930 10:51:24.154245 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-7bcpp_0dab040d-a74a-48f1-b2e5-fb2fe6de3b58/manager/0.log" Sep 30 10:51:24 crc kubenswrapper[4970]: I0930 10:51:24.264448 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d857cc749-svx8h_b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7/kube-rbac-proxy/0.log" Sep 30 10:51:24 crc kubenswrapper[4970]: I0930 10:51:24.367975 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7d857cc749-svx8h_b833fa3c-cfc2-4a0b-920c-7a8e32a7edf7/manager/0.log" Sep 30 10:51:24 crc kubenswrapper[4970]: I0930 10:51:24.453774 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7975b88857-js7xj_cefaa649-872b-43be-9763-85ee950bb5d6/kube-rbac-proxy/0.log" Sep 30 10:51:24 crc kubenswrapper[4970]: I0930 10:51:24.459795 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-7975b88857-js7xj_cefaa649-872b-43be-9763-85ee950bb5d6/manager/0.log" Sep 30 10:51:24 crc kubenswrapper[4970]: I0930 10:51:24.593807 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-vjpqd_9d9bdcb3-a944-4379-8dfd-858a022e946a/kube-rbac-proxy/0.log" Sep 30 10:51:24 crc kubenswrapper[4970]: I0930 10:51:24.676354 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-vjpqd_9d9bdcb3-a944-4379-8dfd-858a022e946a/manager/0.log" Sep 30 10:51:24 crc kubenswrapper[4970]: I0930 10:51:24.779017 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-rjk8q_ae58a1aa-0503-4387-91cf-fc6f396a180f/manager/0.log" Sep 30 10:51:24 crc kubenswrapper[4970]: I0930 10:51:24.785427 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-rjk8q_ae58a1aa-0503-4387-91cf-fc6f396a180f/kube-rbac-proxy/0.log" Sep 30 10:51:24 crc kubenswrapper[4970]: I0930 10:51:24.851127 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-vwkw2_1b1a92f2-46aa-492c-906b-1b86c58ba818/kube-rbac-proxy/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.005242 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-vwkw2_1b1a92f2-46aa-492c-906b-1b86c58ba818/manager/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.038486 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64d7b59854-8m95z_0283cb68-98f4-4dcf-99c0-55ebc251dc19/kube-rbac-proxy/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.085588 4970 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-64d7b59854-8m95z_0283cb68-98f4-4dcf-99c0-55ebc251dc19/manager/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.202512 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-c7c776c96-74r7d_815b1df3-7d86-407a-a793-baec392c0f76/kube-rbac-proxy/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.278496 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-c7c776c96-74r7d_815b1df3-7d86-407a-a793-baec392c0f76/manager/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.368402 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-76fcc6dc7c-p9fxz_b4a0b16f-5d81-4236-850f-03f628bb3595/manager/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.399409 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-76fcc6dc7c-p9fxz_b4a0b16f-5d81-4236-850f-03f628bb3595/kube-rbac-proxy/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.492785 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6d776955-gnxmq_40f541c2-3a4e-48ec-a01f-a3d395202085/kube-rbac-proxy/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.526144 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6d776955-gnxmq_40f541c2-3a4e-48ec-a01f-a3d395202085/manager/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.583794 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5d64b45c9c-7q8rq_b33f5230-0a43-418a-a25c-690de07ddc21/kube-rbac-proxy/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.668190 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2" Sep 30 10:51:25 crc kubenswrapper[4970]: E0930 10:51:25.668424 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.801109 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7d8c4cd779-pt9l2_814b0f3a-2bb6-45ba-a6c2-f798b43d4494/kube-rbac-proxy/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.978024 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-7d8c4cd779-pt9l2_814b0f3a-2bb6-45ba-a6c2-f798b43d4494/operator/0.log" Sep 30 10:51:25 crc kubenswrapper[4970]: I0930 10:51:25.989300 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-xv9kb_8bc86bb1-b7dd-4e60-9d3a-820f8b7a99ef/registry-server/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.080357 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-9976ff44c-ss9vs_db952a6d-9ea1-482e-aec3-7a93fcd6587c/kube-rbac-proxy/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.277437 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-s6hpz_116a4b20-5a9a-4456-8816-637e0740a792/kube-rbac-proxy/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.281630 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-9976ff44c-ss9vs_db952a6d-9ea1-482e-aec3-7a93fcd6587c/manager/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.372861 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-s6hpz_116a4b20-5a9a-4456-8816-637e0740a792/manager/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.505838 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-79d8469568-kpw26_7f744173-6696-4797-a55c-85b498bff4da/operator/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.557302 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bc7dc7bd9-v5qrd_eea4d20f-1d77-4e9b-bbc3-644ff1a5a314/kube-rbac-proxy/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.674504 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-5d64b45c9c-7q8rq_b33f5230-0a43-418a-a25c-690de07ddc21/manager/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.675698 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-bc7dc7bd9-v5qrd_eea4d20f-1d77-4e9b-bbc3-644ff1a5a314/manager/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.716165 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-bdcpn_5cfa1456-1b45-4385-8fc5-27dccef45958/kube-rbac-proxy/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.817365 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-bdcpn_5cfa1456-1b45-4385-8fc5-27dccef45958/manager/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.846742 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-f66b554c6-sfggm_7f9f19d7-d284-4757-94a1-1a86a8f28b17/kube-rbac-proxy/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.970401 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-f66b554c6-sfggm_7f9f19d7-d284-4757-94a1-1a86a8f28b17/manager/0.log" Sep 30 10:51:26 crc kubenswrapper[4970]: I0930 10:51:26.991033 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-76669f99c-gtp7j_527884ff-dc23-4a9d-8911-aedf784b5eb1/kube-rbac-proxy/0.log" Sep 30 10:51:27 crc kubenswrapper[4970]: I0930 10:51:27.037208 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-76669f99c-gtp7j_527884ff-dc23-4a9d-8911-aedf784b5eb1/manager/0.log" Sep 30 10:51:39 crc kubenswrapper[4970]: I0930 10:51:39.668709 4970 scope.go:117] "RemoveContainer" 
containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2" Sep 30 10:51:39 crc kubenswrapper[4970]: E0930 10:51:39.669615 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:51:43 crc kubenswrapper[4970]: I0930 10:51:43.479406 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-65v4s_e0b59dab-c4d7-4baa-9811-f29d7b19be0b/control-plane-machine-set-operator/0.log" Sep 30 10:51:43 crc kubenswrapper[4970]: I0930 10:51:43.803797 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-b78lb_f2267d30-75c6-4002-ae56-b623dc6d7e42/kube-rbac-proxy/0.log" Sep 30 10:51:43 crc kubenswrapper[4970]: I0930 10:51:43.845725 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-b78lb_f2267d30-75c6-4002-ae56-b623dc6d7e42/machine-api-operator/0.log" Sep 30 10:51:51 crc kubenswrapper[4970]: I0930 10:51:51.669572 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2" Sep 30 10:51:51 crc kubenswrapper[4970]: E0930 10:51:51.670462 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:51:55 crc kubenswrapper[4970]: I0930 10:51:55.232250 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-lzk5z_5d792ad1-1442-40dc-a7d1-df5284e06e35/cert-manager-controller/0.log" Sep 30 10:51:55 crc kubenswrapper[4970]: I0930 10:51:55.405858 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-8zhcv_56eac2ba-1797-44ac-9f39-83f71a6f689d/cert-manager-cainjector/0.log" Sep 30 10:51:55 crc kubenswrapper[4970]: I0930 10:51:55.432564 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-l8qdh_ee427339-b272-4768-bb9d-27fb3e8eab0e/cert-manager-webhook/0.log" Sep 30 10:52:05 crc kubenswrapper[4970]: I0930 10:52:05.670272 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2" Sep 30 10:52:05 crc kubenswrapper[4970]: E0930 10:52:05.671037 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:52:06 crc kubenswrapper[4970]: I0930 10:52:06.044838 4970 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-864bb6dfb5-z745b_1b05f65e-1145-40c4-a5cb-e07766072045/nmstate-console-plugin/0.log" Sep 30 10:52:06 crc kubenswrapper[4970]: I0930 10:52:06.219896 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-92md2_cf79d047-21bc-461c-a5c7-7c12104fbf35/nmstate-handler/0.log" Sep 30 10:52:06 crc kubenswrapper[4970]: I0930 10:52:06.290696 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58fcddf996-xtzzj_42b7f1da-5493-4471-980a-a87efdd8eda2/kube-rbac-proxy/0.log" Sep 30 10:52:06 crc kubenswrapper[4970]: I0930 10:52:06.365092 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-58fcddf996-xtzzj_42b7f1da-5493-4471-980a-a87efdd8eda2/nmstate-metrics/0.log" Sep 30 10:52:06 crc kubenswrapper[4970]: I0930 10:52:06.424399 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5d6f6cfd66-lfwqt_f65ea665-ce1c-4197-ae02-5810c62f1355/nmstate-operator/0.log" Sep 30 10:52:06 crc kubenswrapper[4970]: I0930 10:52:06.525290 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6d689559c5-ztvnw_15cbe10c-fb64-4630-bd5b-fd50c2c07d64/nmstate-webhook/0.log" Sep 30 10:52:19 crc kubenswrapper[4970]: I0930 10:52:19.668517 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2" Sep 30 10:52:19 crc kubenswrapper[4970]: E0930 10:52:19.669148 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129" Sep 30 10:52:21 crc kubenswrapper[4970]: I0930 10:52:21.091184 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5d688f5ffc-vzmvh_0c72cc58-2ee8-414b-a656-a2623e1664f0/kube-rbac-proxy/0.log" Sep 30 10:52:21 crc kubenswrapper[4970]: I0930 10:52:21.195527 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5d688f5ffc-vzmvh_0c72cc58-2ee8-414b-a656-a2623e1664f0/controller/0.log" Sep 30 10:52:21 crc kubenswrapper[4970]: I0930 10:52:21.296105 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-5478bdb765-p7rc4_7c5f78f9-5ebd-434d-82c0-df6af4bc483b/frr-k8s-webhook-server/0.log" Sep 30 10:52:21 crc kubenswrapper[4970]: I0930 10:52:21.365590 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-frr-files/0.log" Sep 30 10:52:21 crc kubenswrapper[4970]: I0930 10:52:21.587072 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-reloader/0.log" Sep 30 10:52:21 crc kubenswrapper[4970]: I0930 10:52:21.597337 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-frr-files/0.log" Sep 30 10:52:21 crc kubenswrapper[4970]: I0930 10:52:21.618075 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-reloader/0.log" 
Sep 30 10:52:21 crc kubenswrapper[4970]: I0930 10:52:21.631079 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-metrics/0.log"
Sep 30 10:52:21 crc kubenswrapper[4970]: I0930 10:52:21.760799 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-reloader/0.log"
Sep 30 10:52:21 crc kubenswrapper[4970]: I0930 10:52:21.764198 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-frr-files/0.log"
Sep 30 10:52:21 crc kubenswrapper[4970]: I0930 10:52:21.789949 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-metrics/0.log"
Sep 30 10:52:21 crc kubenswrapper[4970]: I0930 10:52:21.808278 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-metrics/0.log"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.022009 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-frr-files/0.log"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.029333 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-metrics/0.log"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.061377 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/cp-reloader/0.log"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.073973 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/controller/0.log"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.250578 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/frr-metrics/0.log"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.261758 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/kube-rbac-proxy/0.log"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.288434 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/kube-rbac-proxy-frr/0.log"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.483552 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/reloader/0.log"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.561141 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-5689865b7f-lzf5z_44474490-8653-4ad2-8ae3-d4e089664fb8/manager/0.log"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.702677 4970 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wczwp"]
Sep 30 10:52:22 crc kubenswrapper[4970]: E0930 10:52:22.703192 4970 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8" containerName="container-00"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.703220 4970 state_mem.go:107] "Deleted CPUSet assignment" podUID="8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8" containerName="container-00"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.703501 4970 memory_manager.go:354] "RemoveStaleState removing state" podUID="8878e1d3-7d98-4d3b-bea3-1b52af7e1aa8" containerName="container-00"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.705075 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.727003 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79d5d6bd79-dmktk_0365d978-934a-4079-98be-d612928d9496/webhook-server/0.log"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.730190 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wczwp"]
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.784131 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfzj5\" (UniqueName: \"kubernetes.io/projected/a4ac000d-29af-4ff5-a861-c89cd78802b9-kube-api-access-mfzj5\") pod \"community-operators-wczwp\" (UID: \"a4ac000d-29af-4ff5-a861-c89cd78802b9\") " pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.784488 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ac000d-29af-4ff5-a861-c89cd78802b9-catalog-content\") pod \"community-operators-wczwp\" (UID: \"a4ac000d-29af-4ff5-a861-c89cd78802b9\") " pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.784540 4970 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ac000d-29af-4ff5-a861-c89cd78802b9-utilities\") pod \"community-operators-wczwp\" (UID: \"a4ac000d-29af-4ff5-a861-c89cd78802b9\") " pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.885836 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfzj5\" (UniqueName: \"kubernetes.io/projected/a4ac000d-29af-4ff5-a861-c89cd78802b9-kube-api-access-mfzj5\") pod \"community-operators-wczwp\" (UID: \"a4ac000d-29af-4ff5-a861-c89cd78802b9\") " pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.885979 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ac000d-29af-4ff5-a861-c89cd78802b9-catalog-content\") pod \"community-operators-wczwp\" (UID: \"a4ac000d-29af-4ff5-a861-c89cd78802b9\") " pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.886069 4970 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ac000d-29af-4ff5-a861-c89cd78802b9-utilities\") pod \"community-operators-wczwp\" (UID: \"a4ac000d-29af-4ff5-a861-c89cd78802b9\") " pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.886592 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ac000d-29af-4ff5-a861-c89cd78802b9-utilities\") pod \"community-operators-wczwp\" (UID: \"a4ac000d-29af-4ff5-a861-c89cd78802b9\") " pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.887179 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ac000d-29af-4ff5-a861-c89cd78802b9-catalog-content\") pod \"community-operators-wczwp\" (UID: \"a4ac000d-29af-4ff5-a861-c89cd78802b9\") " pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:22 crc kubenswrapper[4970]: I0930 10:52:22.910262 4970 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfzj5\" (UniqueName: \"kubernetes.io/projected/a4ac000d-29af-4ff5-a861-c89cd78802b9-kube-api-access-mfzj5\") pod \"community-operators-wczwp\" (UID: \"a4ac000d-29af-4ff5-a861-c89cd78802b9\") " pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:23 crc kubenswrapper[4970]: I0930 10:52:23.005296 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-f6gvx_fef9dca8-f780-4d0b-b7b8-68cd4f13de1a/kube-rbac-proxy/0.log"
Sep 30 10:52:23 crc kubenswrapper[4970]: I0930 10:52:23.035846 4970 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:23 crc kubenswrapper[4970]: I0930 10:52:23.428011 4970 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wczwp"]
Sep 30 10:52:23 crc kubenswrapper[4970]: I0930 10:52:23.467825 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wczwp" event={"ID":"a4ac000d-29af-4ff5-a861-c89cd78802b9","Type":"ContainerStarted","Data":"ed8743426a77f87fe8bd3a981cd38390ffad8e6c858828740bbf0595d3a456d1"}
Sep 30 10:52:23 crc kubenswrapper[4970]: I0930 10:52:23.757788 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-z8ds7_2c47cc30-570a-4a61-8025-f1f12067fa0b/frr/0.log"
Sep 30 10:52:23 crc kubenswrapper[4970]: I0930 10:52:23.812951 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-f6gvx_fef9dca8-f780-4d0b-b7b8-68cd4f13de1a/speaker/0.log"
Sep 30 10:52:24 crc kubenswrapper[4970]: I0930 10:52:24.477147 4970 generic.go:334] "Generic (PLEG): container finished" podID="a4ac000d-29af-4ff5-a861-c89cd78802b9" containerID="ea0e2f8cc308bee9d690b888240cb760827e8b1cecf181eff5991b7eb65211f1" exitCode=0
Sep 30 10:52:24 crc kubenswrapper[4970]: I0930 10:52:24.477189 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wczwp" event={"ID":"a4ac000d-29af-4ff5-a861-c89cd78802b9","Type":"ContainerDied","Data":"ea0e2f8cc308bee9d690b888240cb760827e8b1cecf181eff5991b7eb65211f1"}
Sep 30 10:52:24 crc kubenswrapper[4970]: I0930 10:52:24.481824 4970 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Sep 30 10:52:25 crc kubenswrapper[4970]: I0930 10:52:25.486701 4970 generic.go:334] "Generic (PLEG): container finished" podID="a4ac000d-29af-4ff5-a861-c89cd78802b9" containerID="ff9b82fdd264064f4d2ba00d1c6e9b901fa21aba61bd8d2b30483ea1f89b6642" exitCode=0
Sep 30 10:52:25 crc kubenswrapper[4970]: I0930 10:52:25.486924 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wczwp" event={"ID":"a4ac000d-29af-4ff5-a861-c89cd78802b9","Type":"ContainerDied","Data":"ff9b82fdd264064f4d2ba00d1c6e9b901fa21aba61bd8d2b30483ea1f89b6642"}
Sep 30 10:52:27 crc kubenswrapper[4970]: I0930 10:52:27.508868 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wczwp" event={"ID":"a4ac000d-29af-4ff5-a861-c89cd78802b9","Type":"ContainerStarted","Data":"7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066"}
Sep 30 10:52:27 crc kubenswrapper[4970]: I0930 10:52:27.541828 4970 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wczwp" podStartSLOduration=4.168073438 podStartE2EDuration="5.541808119s" podCreationTimestamp="2025-09-30 10:52:22 +0000 UTC" firstStartedPulling="2025-09-30 10:52:24.481387553 +0000 UTC m=+3957.553238497" lastFinishedPulling="2025-09-30 10:52:25.855122244 +0000 UTC m=+3958.926973178" observedRunningTime="2025-09-30 10:52:27.535022742 +0000 UTC m=+3960.606873706" watchObservedRunningTime="2025-09-30 10:52:27.541808119 +0000 UTC m=+3960.613659073"
Sep 30 10:52:33 crc kubenswrapper[4970]: I0930 10:52:33.036626 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:33 crc kubenswrapper[4970]: I0930 10:52:33.037215 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:33 crc kubenswrapper[4970]: I0930 10:52:33.146054 4970 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:33 crc kubenswrapper[4970]: I0930 10:52:33.623498 4970 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:33 crc kubenswrapper[4970]: I0930 10:52:33.683134 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wczwp"]
Sep 30 10:52:34 crc kubenswrapper[4970]: I0930 10:52:34.668192 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:52:34 crc kubenswrapper[4970]: E0930 10:52:34.668812 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:52:35 crc kubenswrapper[4970]: I0930 10:52:35.573724 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wczwp" podUID="a4ac000d-29af-4ff5-a861-c89cd78802b9" containerName="registry-server" containerID="cri-o://7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066" gracePeriod=2
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.061021 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.241294 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzj5\" (UniqueName: \"kubernetes.io/projected/a4ac000d-29af-4ff5-a861-c89cd78802b9-kube-api-access-mfzj5\") pod \"a4ac000d-29af-4ff5-a861-c89cd78802b9\" (UID: \"a4ac000d-29af-4ff5-a861-c89cd78802b9\") "
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.241384 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ac000d-29af-4ff5-a861-c89cd78802b9-utilities\") pod \"a4ac000d-29af-4ff5-a861-c89cd78802b9\" (UID: \"a4ac000d-29af-4ff5-a861-c89cd78802b9\") "
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.241562 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ac000d-29af-4ff5-a861-c89cd78802b9-catalog-content\") pod \"a4ac000d-29af-4ff5-a861-c89cd78802b9\" (UID: \"a4ac000d-29af-4ff5-a861-c89cd78802b9\") "
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.242335 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ac000d-29af-4ff5-a861-c89cd78802b9-utilities" (OuterVolumeSpecName: "utilities") pod "a4ac000d-29af-4ff5-a861-c89cd78802b9" (UID: "a4ac000d-29af-4ff5-a861-c89cd78802b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.247273 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4ac000d-29af-4ff5-a861-c89cd78802b9-kube-api-access-mfzj5" (OuterVolumeSpecName: "kube-api-access-mfzj5") pod "a4ac000d-29af-4ff5-a861-c89cd78802b9" (UID: "a4ac000d-29af-4ff5-a861-c89cd78802b9"). InnerVolumeSpecName "kube-api-access-mfzj5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.307423 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4ac000d-29af-4ff5-a861-c89cd78802b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4ac000d-29af-4ff5-a861-c89cd78802b9" (UID: "a4ac000d-29af-4ff5-a861-c89cd78802b9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.343615 4970 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4ac000d-29af-4ff5-a861-c89cd78802b9-catalog-content\") on node \"crc\" DevicePath \"\""
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.344781 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfzj5\" (UniqueName: \"kubernetes.io/projected/a4ac000d-29af-4ff5-a861-c89cd78802b9-kube-api-access-mfzj5\") on node \"crc\" DevicePath \"\""
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.344822 4970 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4ac000d-29af-4ff5-a861-c89cd78802b9-utilities\") on node \"crc\" DevicePath \"\""
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.586850 4970 generic.go:334] "Generic (PLEG): container finished" podID="a4ac000d-29af-4ff5-a861-c89cd78802b9" containerID="7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066" exitCode=0
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.587128 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wczwp" event={"ID":"a4ac000d-29af-4ff5-a861-c89cd78802b9","Type":"ContainerDied","Data":"7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066"}
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.587229 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wczwp" event={"ID":"a4ac000d-29af-4ff5-a861-c89cd78802b9","Type":"ContainerDied","Data":"ed8743426a77f87fe8bd3a981cd38390ffad8e6c858828740bbf0595d3a456d1"}
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.587317 4970 scope.go:117] "RemoveContainer" containerID="7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066"
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.587512 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wczwp"
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.616353 4970 scope.go:117] "RemoveContainer" containerID="ff9b82fdd264064f4d2ba00d1c6e9b901fa21aba61bd8d2b30483ea1f89b6642"
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.620960 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wczwp"]
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.654620 4970 scope.go:117] "RemoveContainer" containerID="ea0e2f8cc308bee9d690b888240cb760827e8b1cecf181eff5991b7eb65211f1"
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.662700 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wczwp"]
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.704696 4970 scope.go:117] "RemoveContainer" containerID="7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066"
Sep 30 10:52:36 crc kubenswrapper[4970]: E0930 10:52:36.705171 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066\": container with ID starting with 7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066 not found: ID does not exist" containerID="7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066"
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.705197 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066"} err="failed to get container status \"7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066\": rpc error: code = NotFound desc = could not find container \"7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066\": container with ID starting with 7f80eb14bfe134f67f24833074582affe7c0b2073534b452acbb1b72ce947066 not found: ID does not exist"
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.705217 4970 scope.go:117] "RemoveContainer" containerID="ff9b82fdd264064f4d2ba00d1c6e9b901fa21aba61bd8d2b30483ea1f89b6642"
Sep 30 10:52:36 crc kubenswrapper[4970]: E0930 10:52:36.705382 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff9b82fdd264064f4d2ba00d1c6e9b901fa21aba61bd8d2b30483ea1f89b6642\": container with ID starting with ff9b82fdd264064f4d2ba00d1c6e9b901fa21aba61bd8d2b30483ea1f89b6642 not found: ID does not exist" containerID="ff9b82fdd264064f4d2ba00d1c6e9b901fa21aba61bd8d2b30483ea1f89b6642"
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.705398 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff9b82fdd264064f4d2ba00d1c6e9b901fa21aba61bd8d2b30483ea1f89b6642"} err="failed to get container status \"ff9b82fdd264064f4d2ba00d1c6e9b901fa21aba61bd8d2b30483ea1f89b6642\": rpc error: code = NotFound desc = could not find container \"ff9b82fdd264064f4d2ba00d1c6e9b901fa21aba61bd8d2b30483ea1f89b6642\": container with ID starting with ff9b82fdd264064f4d2ba00d1c6e9b901fa21aba61bd8d2b30483ea1f89b6642 not found: ID does not exist"
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.705410 4970 scope.go:117] "RemoveContainer" containerID="ea0e2f8cc308bee9d690b888240cb760827e8b1cecf181eff5991b7eb65211f1"
Sep 30 10:52:36 crc kubenswrapper[4970]: E0930 10:52:36.705568 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea0e2f8cc308bee9d690b888240cb760827e8b1cecf181eff5991b7eb65211f1\": container with ID starting with ea0e2f8cc308bee9d690b888240cb760827e8b1cecf181eff5991b7eb65211f1 not found: ID does not exist" containerID="ea0e2f8cc308bee9d690b888240cb760827e8b1cecf181eff5991b7eb65211f1"
Sep 30 10:52:36 crc kubenswrapper[4970]: I0930 10:52:36.705584 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea0e2f8cc308bee9d690b888240cb760827e8b1cecf181eff5991b7eb65211f1"} err="failed to get container status \"ea0e2f8cc308bee9d690b888240cb760827e8b1cecf181eff5991b7eb65211f1\": rpc error: code = NotFound desc = could not find container \"ea0e2f8cc308bee9d690b888240cb760827e8b1cecf181eff5991b7eb65211f1\": container with ID starting with ea0e2f8cc308bee9d690b888240cb760827e8b1cecf181eff5991b7eb65211f1 not found: ID does not exist"
Sep 30 10:52:37 crc kubenswrapper[4970]: I0930 10:52:37.678757 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4ac000d-29af-4ff5-a861-c89cd78802b9" path="/var/lib/kubelet/pods/a4ac000d-29af-4ff5-a861-c89cd78802b9/volumes"
Sep 30 10:52:38 crc kubenswrapper[4970]: I0930 10:52:38.183816 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/util/0.log"
Sep 30 10:52:38 crc kubenswrapper[4970]: I0930 10:52:38.390459 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/util/0.log"
Sep 30 10:52:38 crc kubenswrapper[4970]: I0930 10:52:38.391891 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/pull/0.log"
Sep 30 10:52:38 crc kubenswrapper[4970]: I0930 10:52:38.434102 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/pull/0.log"
Sep 30 10:52:38 crc kubenswrapper[4970]: I0930 10:52:38.591271 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/util/0.log"
Sep 30 10:52:38 crc kubenswrapper[4970]: I0930 10:52:38.617593 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/extract/0.log"
Sep 30 10:52:38 crc kubenswrapper[4970]: I0930 10:52:38.622228 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bch2lhd_389409c8-24ce-486a-b03a-8b8770ddedfb/pull/0.log"
Sep 30 10:52:38 crc kubenswrapper[4970]: I0930 10:52:38.785285 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/extract-utilities/0.log"
Sep 30 10:52:38 crc kubenswrapper[4970]: I0930 10:52:38.934091 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/extract-content/0.log"
Sep 30 10:52:38 crc kubenswrapper[4970]: I0930 10:52:38.978583 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/extract-utilities/0.log"
Sep 30 10:52:38 crc kubenswrapper[4970]: I0930 10:52:38.982045 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/extract-content/0.log"
Sep 30 10:52:39 crc kubenswrapper[4970]: I0930 10:52:39.744559 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/extract-utilities/0.log"
Sep 30 10:52:39 crc kubenswrapper[4970]: I0930 10:52:39.772448 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/extract-content/0.log"
Sep 30 10:52:39 crc kubenswrapper[4970]: I0930 10:52:39.967065 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/extract-utilities/0.log"
Sep 30 10:52:40 crc kubenswrapper[4970]: I0930 10:52:40.234729 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kxvhd_aca1e80b-4b87-493b-9024-c063fc5fa638/registry-server/0.log"
Sep 30 10:52:40 crc kubenswrapper[4970]: I0930 10:52:40.249182 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/extract-utilities/0.log"
Sep 30 10:52:40 crc kubenswrapper[4970]: I0930 10:52:40.257708 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/extract-content/0.log"
Sep 30 10:52:40 crc kubenswrapper[4970]: I0930 10:52:40.307435 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/extract-content/0.log"
Sep 30 10:52:40 crc kubenswrapper[4970]: I0930 10:52:40.438439 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/extract-utilities/0.log"
Sep 30 10:52:40 crc kubenswrapper[4970]: I0930 10:52:40.451509 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/extract-content/0.log"
Sep 30 10:52:40 crc kubenswrapper[4970]: I0930 10:52:40.680291 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/util/0.log"
Sep 30 10:52:40 crc kubenswrapper[4970]: I0930 10:52:40.847251 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/pull/0.log"
Sep 30 10:52:40 crc kubenswrapper[4970]: I0930 10:52:40.922302 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/util/0.log"
Sep 30 10:52:40 crc kubenswrapper[4970]: I0930 10:52:40.950661 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/pull/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.127230 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/util/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.143729 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/pull/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.191625 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-ft8rk_b1af6628-add8-425c-b470-b8c413f69624/registry-server/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.220538 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96r75t9_1f33dea7-5310-40ed-9afc-243a4353a42b/extract/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.280507 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mc94p_d71db2c5-c1c2-42f9-a89e-086c606b9e5f/marketplace-operator/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.382080 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/extract-utilities/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.518724 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/extract-content/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.524602 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/extract-utilities/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.535720 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/extract-content/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.747347 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/extract-utilities/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.751002 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/extract-content/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.785451 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhmrb_53479960-137c-4f2e-88be-b708ada9056f/extract-utilities/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.889806 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-fflcj_bab7817b-f28e-447d-98f5-8fb66262d7ec/registry-server/0.log"
Sep 30 10:52:41 crc kubenswrapper[4970]: I0930 10:52:41.965842 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhmrb_53479960-137c-4f2e-88be-b708ada9056f/extract-utilities/0.log"
Sep 30 10:52:42 crc kubenswrapper[4970]: I0930 10:52:42.001304 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhmrb_53479960-137c-4f2e-88be-b708ada9056f/extract-content/0.log"
Sep 30 10:52:42 crc kubenswrapper[4970]: I0930 10:52:42.010033 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhmrb_53479960-137c-4f2e-88be-b708ada9056f/extract-content/0.log"
Sep 30 10:52:42 crc kubenswrapper[4970]: I0930 10:52:42.168349 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhmrb_53479960-137c-4f2e-88be-b708ada9056f/extract-utilities/0.log"
Sep 30 10:52:42 crc kubenswrapper[4970]: I0930 10:52:42.196594 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhmrb_53479960-137c-4f2e-88be-b708ada9056f/extract-content/0.log"
Sep 30 10:52:42 crc kubenswrapper[4970]: I0930 10:52:42.302972 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-vhmrb_53479960-137c-4f2e-88be-b708ada9056f/registry-server/0.log"
Sep 30 10:52:47 crc kubenswrapper[4970]: I0930 10:52:47.674247 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:52:47 crc kubenswrapper[4970]: E0930 10:52:47.674938 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:53:01 crc kubenswrapper[4970]: I0930 10:53:01.672440 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:53:01 crc kubenswrapper[4970]: E0930 10:53:01.673120 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:53:04 crc kubenswrapper[4970]: E0930 10:53:04.117691 4970 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.132:59816->38.102.83.132:42257: read tcp 38.102.83.132:59816->38.102.83.132:42257: read: connection reset by peer
Sep 30 10:53:13 crc kubenswrapper[4970]: I0930 10:53:13.668438 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:53:13 crc kubenswrapper[4970]: E0930 10:53:13.669244 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:53:27 crc kubenswrapper[4970]: I0930 10:53:27.675567 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:53:27 crc kubenswrapper[4970]: E0930 10:53:27.677050 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:53:39 crc kubenswrapper[4970]: I0930 10:53:39.669754 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:53:39 crc kubenswrapper[4970]: E0930 10:53:39.670956 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:53:52 crc kubenswrapper[4970]: I0930 10:53:52.669242 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:53:52 crc kubenswrapper[4970]: E0930 10:53:52.670075 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:54:05 crc kubenswrapper[4970]: I0930 10:54:05.668941 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:54:05 crc kubenswrapper[4970]: E0930 10:54:05.669704 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:54:20 crc kubenswrapper[4970]: I0930 10:54:20.669305 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:54:20 crc kubenswrapper[4970]: E0930 10:54:20.669859 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:54:35 crc kubenswrapper[4970]: I0930 10:54:35.669312 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:54:35 crc kubenswrapper[4970]: E0930 10:54:35.670347 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:54:38 crc kubenswrapper[4970]: I0930 10:54:38.990323 4970 generic.go:334] "Generic (PLEG): container finished" podID="8bd969ad-610a-4fa4-a8e9-39ec6f63589e" containerID="765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e" exitCode=0
Sep 30 10:54:38 crc kubenswrapper[4970]: I0930 10:54:38.990439 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-xk4t7/must-gather-6pc88" event={"ID":"8bd969ad-610a-4fa4-a8e9-39ec6f63589e","Type":"ContainerDied","Data":"765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e"}
Sep 30 10:54:38 crc kubenswrapper[4970]: I0930 10:54:38.991342 4970 scope.go:117] "RemoveContainer" containerID="765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e"
Sep 30 10:54:39 crc kubenswrapper[4970]: I0930 10:54:39.120644 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xk4t7_must-gather-6pc88_8bd969ad-610a-4fa4-a8e9-39ec6f63589e/gather/0.log"
Sep 30 10:54:46 crc kubenswrapper[4970]: I0930 10:54:46.669149 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:54:46 crc kubenswrapper[4970]: E0930 10:54:46.670023 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:54:50 crc kubenswrapper[4970]: I0930 10:54:50.652707 4970 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-xk4t7/must-gather-6pc88"]
Sep 30 10:54:50 crc kubenswrapper[4970]: I0930 10:54:50.653563 4970 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-xk4t7/must-gather-6pc88" podUID="8bd969ad-610a-4fa4-a8e9-39ec6f63589e" containerName="copy" containerID="cri-o://4127cf339d2c4b1aac83469561563da2b00488c3b92680d9887ecf56555c5228" gracePeriod=2
Sep 30 10:54:50 crc kubenswrapper[4970]: I0930 10:54:50.662983 4970 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-xk4t7/must-gather-6pc88"]
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.080904 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xk4t7_must-gather-6pc88_8bd969ad-610a-4fa4-a8e9-39ec6f63589e/copy/0.log"
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.081798 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk4t7/must-gather-6pc88"
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.153731 4970 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-xk4t7_must-gather-6pc88_8bd969ad-610a-4fa4-a8e9-39ec6f63589e/copy/0.log"
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.154352 4970 generic.go:334] "Generic (PLEG): container finished" podID="8bd969ad-610a-4fa4-a8e9-39ec6f63589e" containerID="4127cf339d2c4b1aac83469561563da2b00488c3b92680d9887ecf56555c5228" exitCode=143
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.154412 4970 scope.go:117] "RemoveContainer" containerID="4127cf339d2c4b1aac83469561563da2b00488c3b92680d9887ecf56555c5228"
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.154587 4970 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-xk4t7/must-gather-6pc88"
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.185511 4970 scope.go:117] "RemoveContainer" containerID="765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e"
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.192186 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8bd969ad-610a-4fa4-a8e9-39ec6f63589e-must-gather-output\") pod \"8bd969ad-610a-4fa4-a8e9-39ec6f63589e\" (UID: \"8bd969ad-610a-4fa4-a8e9-39ec6f63589e\") "
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.192502 4970 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktxtw\" (UniqueName: \"kubernetes.io/projected/8bd969ad-610a-4fa4-a8e9-39ec6f63589e-kube-api-access-ktxtw\") pod \"8bd969ad-610a-4fa4-a8e9-39ec6f63589e\" (UID: \"8bd969ad-610a-4fa4-a8e9-39ec6f63589e\") "
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.201156 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bd969ad-610a-4fa4-a8e9-39ec6f63589e-kube-api-access-ktxtw" (OuterVolumeSpecName: "kube-api-access-ktxtw") pod "8bd969ad-610a-4fa4-a8e9-39ec6f63589e" (UID: "8bd969ad-610a-4fa4-a8e9-39ec6f63589e"). InnerVolumeSpecName "kube-api-access-ktxtw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.260977 4970 scope.go:117] "RemoveContainer" containerID="4127cf339d2c4b1aac83469561563da2b00488c3b92680d9887ecf56555c5228"
Sep 30 10:54:51 crc kubenswrapper[4970]: E0930 10:54:51.264423 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4127cf339d2c4b1aac83469561563da2b00488c3b92680d9887ecf56555c5228\": container with ID starting with 4127cf339d2c4b1aac83469561563da2b00488c3b92680d9887ecf56555c5228 not found: ID does not exist" containerID="4127cf339d2c4b1aac83469561563da2b00488c3b92680d9887ecf56555c5228"
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.264475 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4127cf339d2c4b1aac83469561563da2b00488c3b92680d9887ecf56555c5228"} err="failed to get container status \"4127cf339d2c4b1aac83469561563da2b00488c3b92680d9887ecf56555c5228\": rpc error: code = NotFound desc = could not find container \"4127cf339d2c4b1aac83469561563da2b00488c3b92680d9887ecf56555c5228\": container with ID starting with 4127cf339d2c4b1aac83469561563da2b00488c3b92680d9887ecf56555c5228 not found: ID does not exist"
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.264557 4970 scope.go:117] "RemoveContainer" containerID="765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e"
Sep 30 10:54:51 crc kubenswrapper[4970]: E0930 10:54:51.265047 4970 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e\": container with ID starting with 765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e not found: ID does not exist" containerID="765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e"
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.265134 4970 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e"} err="failed to get container status \"765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e\": rpc error: code = NotFound desc = could not find container \"765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e\": container with ID starting with 765959a132c24498f1980d1b26674d71f493d381796fc29f5de42d8ce7d09f7e not found: ID does not exist"
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.297196 4970 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktxtw\" (UniqueName: \"kubernetes.io/projected/8bd969ad-610a-4fa4-a8e9-39ec6f63589e-kube-api-access-ktxtw\") on node \"crc\" DevicePath \"\""
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.359183 4970 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bd969ad-610a-4fa4-a8e9-39ec6f63589e-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8bd969ad-610a-4fa4-a8e9-39ec6f63589e" (UID: "8bd969ad-610a-4fa4-a8e9-39ec6f63589e"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.401419 4970 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8bd969ad-610a-4fa4-a8e9-39ec6f63589e-must-gather-output\") on node \"crc\" DevicePath \"\""
Sep 30 10:54:51 crc kubenswrapper[4970]: I0930 10:54:51.681840 4970 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bd969ad-610a-4fa4-a8e9-39ec6f63589e" path="/var/lib/kubelet/pods/8bd969ad-610a-4fa4-a8e9-39ec6f63589e/volumes"
Sep 30 10:54:58 crc kubenswrapper[4970]: I0930 10:54:58.669067 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:54:58 crc kubenswrapper[4970]: E0930 10:54:58.669758 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:55:10 crc kubenswrapper[4970]: I0930 10:55:10.669095 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:55:10 crc kubenswrapper[4970]: E0930 10:55:10.670345 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:55:24 crc kubenswrapper[4970]: I0930 10:55:24.669087 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:55:24 crc kubenswrapper[4970]: E0930 10:55:24.669892 4970 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-gcphg_openshift-machine-config-operator(92198682-93fe-4b8a-8b03-bb768b56a129)\"" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" podUID="92198682-93fe-4b8a-8b03-bb768b56a129"
Sep 30 10:55:39 crc kubenswrapper[4970]: I0930 10:55:39.676698 4970 scope.go:117] "RemoveContainer" containerID="6c7e2f339fc3b54de7601dbc598bc4c38b00e98ee9f2adb86f71a6130ca9aed2"
Sep 30 10:55:40 crc kubenswrapper[4970]: I0930 10:55:40.626963 4970 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gcphg" event={"ID":"92198682-93fe-4b8a-8b03-bb768b56a129","Type":"ContainerStarted","Data":"f28088f91200d99b7f4cae4f455f5043696f9fa79cb1bc905f7d959b558d1245"}
Sep 30 10:56:03 crc kubenswrapper[4970]: I0930 10:56:03.257837 4970 scope.go:117] "RemoveContainer" containerID="658f37273f47ac14415581b069e398db7b06b6026ce5cf52d4923ebfc8c1663c"